Latest publications from ACM SIGGRAPH 2019 Posters

Exploration of using face tracking to reduce GPU rendering on current and future auto-stereoscopic displays
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338577
Xingyu Pan, Mengya Zheng, A. Campbell
Abstract: Future auto-stereoscopic displays offer the possibility of virtual reality without head-mounted displays. Because viewpoints fundamentally need to be generated only for known observers, the classical approach of rendering all views at once wastes GPU resources and limits the scale of an auto-stereoscopic display. We present a technique that reduces GPU consumption on auto-stereoscopic displays by giving the display a context awareness of its observers. The technique was first applied to the Looking Glass device on the Unity3D platform. Rather than rendering 45 different views simultaneously, the framework renders, for each observer, only the six views visible to both eyes, based on the tracked eye positions. Given the current specifications of this device, the framework saves 73% of GPU consumption for the Looking Glass when rendering an 8K × 8K resolution scene, and the savings grow as the resolution increases. This technique can be applied to reduce GPU requirements for future auto-stereoscopic displays.
Citations: 5
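The saving described in the abstract comes from rendering only the view indices an observer's eyes can actually see. A minimal sketch of that view-selection idea follows; the viewing-cone angle, views-per-eye count, and angle-to-index mapping are illustrative assumptions, not the paper's implementation:

```python
NUM_VIEWS = 45          # total views a Looking Glass-style display multiplexes
VIEW_CONE_DEG = 40.0    # assumed horizontal viewing cone of the display
VIEWS_PER_EYE = 3       # assumed views kept around each eye for smooth parallax

def visible_views(eye_angles_deg):
    """Map tracked eye angles (degrees from the display normal) to the
    subset of view indices that must actually be rendered."""
    half_cone = VIEW_CONE_DEG / 2.0
    needed = set()
    for angle in eye_angles_deg:
        # Normalize the eye's angle across the cone to a view index.
        t = (angle + half_cone) / VIEW_CONE_DEG        # 0..1 across the cone
        center = int(round(t * (NUM_VIEWS - 1)))
        for i in range(center - VIEWS_PER_EYE // 2,
                       center + VIEWS_PER_EYE // 2 + 1):
            if 0 <= i < NUM_VIEWS:
                needed.add(i)
    return sorted(needed)

# One observer, left and right eyes tracked at -1 and +1 degrees:
views = visible_views([-1.0, 1.0])
print(views)                    # a handful of neighboring indices, not all 45
print(len(views) / NUM_VIEWS)   # fraction of per-view rendering work remaining
```

With more observers, the union of their visible views is rendered; the GPU cost then scales with the number of tracked observers rather than with the display's total view count.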
Remote control experiment with displaybowl and 360-degree video
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338568
Shio Miyafuji, Soichiro Toyohara, Toshiki Sato, H. Koike
Abstract: DisplayBowl is a bowl-shaped hemispherical display for showing omnidirectional images with direction data. It provides users with a novel way of observing 360-degree video streams, improving awareness of the surroundings when operating a remote-controlled vehicle compared to conventional flat displays and HMDs. In this paper, we present a user study in which participants controlled a remote drone using omnidirectional video streaming, comparing the characteristics and advantages of three displays: a flat panel display, a head-mounted display, and DisplayBowl.
Citations: 0
Fluid-measurement technology using flow birefringence of nanocellulose
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338565
Shogo Yamashita, T. Kasuga, Shunichi Suwa, Takashi Miyaki, M. Nogi, J. Rekimoto
Abstract: We propose a fluid-measurement technology aimed at supporting biomechanics research in water sports through fluid simulation and motion analysis. Cellulose nanofibers are introduced into the water as tracer particles to visualize its movement. An optical property of the nanofibers, called flow birefringence, makes water flows appear brighter than their surroundings when placed between right and left circularly polarized plates. We tested the capability of the technology in a water tank and succeeded in using an existing particle-tracking method, particle image velocimetry (PIV), to measure the flows from a pump in the tank.
Citations: 0
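The PIV step mentioned in the abstract conventionally estimates displacement between two frames by locating the peak of a windowed cross-correlation. A self-contained sketch of that standard estimator (not the authors' specific pipeline) on a synthetic speckle pattern:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the displacement between two interrogation windows by
    locating the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                  # put zero lag at the center
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center                # (dy, dx) in pixels

# Synthetic check: shift a random speckle pattern by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # → [ 3 -2]
```

In a full PIV run this estimator is applied per interrogation window across the image pair, yielding a velocity vector field of the visualized flow.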
Neck strap haptics: an algorithm for non-visible VR information using haptic perception on the neck
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338562
Yusuke Yamazaki, S. Hasegawa, Hironori Mitake, Akihiko Shirai
Abstract: In this poster, we propose a new haptic rendering algorithm that dynamically modulates wave parameters to convey distance, direction, and object type by utilizing neck perception and the Hapbeat-Duo, a haptic device composed of two actuators linked by a neck strap. This method is useful for various VR use cases because it provides feedback without disturbing users' movement. In our experiment, we presented sine waves dynamically modulated according to the direction and distance between a player and a target, delivered independently to both sides of the user's neck. As a result, players could reach invisible targets and immediately know when they had reached them. The proposed algorithm allows the neck to become as important a receptive part of the body as the eyes, ears, and hands.
Citations: 6
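One way to realize the modulation the abstract describes is to pan sine-wave amplitude between the two actuators by target direction and raise the frequency as the target gets closer. The mapping constants below are illustrative assumptions, not values from the paper:

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed actuator driving rate

def haptic_waves(direction_deg, distance_m, duration_s=0.1):
    """Render one short frame of left/right actuator signals.
    Direction pans amplitude between the two neck-strap actuators;
    a closer target raises the sine frequency (illustrative mapping)."""
    # Closer target -> higher frequency, clamped to a tactile range.
    freq = np.clip(200.0 / max(distance_m, 0.1), 40.0, 300.0)
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * freq * t)
    # Pan law: -90 deg = fully left, +90 deg = fully right.
    pan = np.clip(direction_deg / 90.0, -1.0, 1.0)
    left = carrier * (1.0 - pan) / 2.0
    right = carrier * (1.0 + pan) / 2.0
    return left, right

# Target 45 degrees to the right, 2 m away: right actuator dominates.
left, right = haptic_waves(direction_deg=45.0, distance_m=2.0)
```

Regenerating such frames each update tick lets the signal track the player continuously, which is what lets users home in on an invisible target by feel alone.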
MagicPAPER: tabletop interactive projection device based on tangible interaction
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338575
Qin Wu, Jiayuan Wang, Sirui Wang, Tong Su, Chenmei Yu
Abstract: This study proposes a tabletop projection device that combines physical objects with interactive projections. Users interact on kraft paper using everyday tools such as marker pens, toothbrushes, colored blocks, and square wooden blocks. The input of the proposed device is a multifunction sensor, and the output is a tabletop projector. Using MagicPAPER, four types of interaction are implemented: drawing, gesture recognition, brushing, and building blocks. The abstract and poster discuss the design motivations and system description of MagicPAPER.
Citations: 9
Unsupervised incremental learning for hand shape and pose estimation
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338553
Pratik Kalshetti, P. Chaudhuri
Abstract: We present an unsupervised incremental learning method for refining hand shape and pose estimation. We propose a refiner network (RefNet) that augments a state-of-the-art hand tracking system (BaseNet) by refining its estimates on unlabeled data. At each input depth frame, the estimates from the BaseNet are iteratively refined by RefNet using a model-fitting strategy. During this process, RefNet adapts to the input data characteristics by incremental learning. We show that our method provides more accurate hand shape and pose estimates on both a standard dataset and real data.
Citations: 3
Fully automatic colorization for anime character considering accurate eye colors
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338585
Kenta Akita, Yuki Morimoto, R. Tsuruno
Abstract: In this paper, we propose a method to colorize line drawings of anime characters' faces with colors from a reference image. Previous studies using reference images often fail to achieve fully automatic colorization, especially for small areas; for example, eye colors in the resulting image may differ from the reference image. The proposed method accurately colorizes eyes in the input line drawing using automatically computed hints. The hints are round patches that specify positions and corresponding colors extracted from the eye areas of a reference image.
Citations: 1
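The hint mechanism the abstract describes — round color patches computed from the reference's eye areas — can be sketched as follows. How the paper detects eye regions and feeds hints to its colorization network is not specified here; the box input, patch radius, and stamping are illustrative assumptions:

```python
import numpy as np

def eye_color_hints(reference_rgb, eye_boxes):
    """For each eye box (y0, x0, y1, x1) found in the reference image,
    return a (center, mean_color) pair to be used as a round hint patch
    at the corresponding position of the target line drawing."""
    hints = []
    for (y0, x0, y1, x1) in eye_boxes:
        color = reference_rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
        center = ((y0 + y1) // 2, (x0 + x1) // 2)
        hints.append((center, color))
    return hints

def stamp_hints(canvas_rgb, hints, radius=4):
    """Draw each hint as a filled circle of its color on the canvas."""
    h, w, _ = canvas_rgb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for (cy, cx), color in hints:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        canvas_rgb[mask] = color
    return canvas_rgb

# Toy reference: a 32x32 image whose "eye" region is a bluish patch.
ref = np.zeros((32, 32, 3))
ref[10:14, 8:14] = [0.1, 0.2, 0.9]
hints = eye_color_hints(ref, [(10, 8, 14, 14)])
canvas = stamp_hints(np.ones((32, 32, 3)), hints)
```

The stamped hints then constrain the colorizer so that small, easily-missed regions such as irises receive the reference's exact colors.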
A procedural approach to creating second empire houses
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338549
M. Kramer, E. Akleman
Abstract: In this work, we present a procedural approach to capturing the variety of appearances of American Second Empire houses. To develop this approach, we identified the set of rules and similarities of Second Empire houses, so our procedure captures the stylistic differences of Second Empire houses with relatively few parameters. Using our interface, we can generate virtual houses in a wide variety of American Second Empire styles. We also developed a method to break these virtual models into slices in order to 3D print them efficiently and economically. Using this approach, we created miniatures of two landmark buildings: the Hamilton-Turner Inn in Savannah and the Enoch Pratt House in Baltimore. Note that the virtual models still provide more detail because of the limited resolution of 3D printing processes.
Citations: 3
Interactive virtual reality orchestral music
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338547
Yanxiang Zhang, Li Tao, Yirun Shen, Clayton Elieisar, Fangbemi Abassin
Abstract: The authors developed a VR orchestral application for interactive music experiences, allowing the virtual musical instruments in an orchestral piece to be repositioned spatially, dynamically, and interactively in VR space. This can be done in changing environments where 3D audio technology is used to restructure traditional orchestral pieces into a new musical art form. User-experience surveys of two kinds of users showed that the VR orchestral system developed in this paper brings distinct advantages to the musical experience.
Citations: 2
Exploring color variations for vector graphics
ACM SIGGRAPH 2019 Posters Pub Date: 2019-07-28 DOI: 10.1145/3306214.3338552
Sayan Ghosh, J. Echevarria, Vineet Batra, Ankit Phogat
Abstract: We propose a novel and intuitive method for exploring recoloring variations of vector graphics. Compared with existing methods, ours is specifically tailored to vector graphics, where color distributions are sparser and are explicitly stored in constructs such as solid colors or gradients, independent of other semantic and spatial relationships. Our method tries to infer some of these relationships before formulating color transfer as a transport problem between the weighted color distributions of the reference and target vector graphics. We enable creative exploration by providing fine-grained control over the resulting transfer, allowing users to modify relative color distributions in real time.
Citations: 0
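Because vector graphics store a small, explicit palette (solid fills and gradient stops), the transport formulation in the abstract reduces, in its simplest form, to matching two small weighted color sets. A toy sketch of that reduction using a minimum-cost assignment — a simplified stand-in for the paper's weighted transport, with hypothetical palettes:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def recolor_palette(target_colors, reference_colors):
    """Assign each solid fill / gradient stop of the target vector graphic
    to a reference palette color, minimizing total RGB distance."""
    # Pairwise distances between every target and reference color.
    cost = np.linalg.norm(
        target_colors[:, None, :] - reference_colors[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    mapping = dict(zip(rows, cols))
    return np.array([reference_colors[mapping[i]]
                     for i in range(len(target_colors))])

target = np.array([[0.9, 0.1, 0.1],    # reddish shape fill
                   [0.1, 0.1, 0.9]])   # bluish gradient stop
reference = np.array([[0.0, 0.0, 1.0], # pure blue
                      [1.0, 0.0, 0.0]])# pure red
print(recolor_palette(target, reference))  # reddish -> red, bluish -> blue
```

The paper's formulation additionally weights each color by its coverage and exposes the relative distributions to the user; this sketch only shows why the sparse, explicit palette makes the transport problem tractable in real time.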