SIGGRAPH Asia 2019 Posters: Latest Publications

Pop-up digital tabletop: seamless integration of 2D and 3D visualizations in a tabletop environment
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364571
Daisuke Inagaki, Yucheng Qiu, Raku Egawa, Takashi Ijiri
Abstract: We propose a pop-up digital tabletop system that seamlessly integrates two-dimensional (2D) and three-dimensional (3D) representations of contents in a digital tabletop environment. By combining a digital tabletop display of 2D contents with a light-field display, we can visualize a part of the 2D contents in 3D. Users of our system can overview the contents in their 2D representation, then observe a detail of the contents in the 3D visualization. The feasibility of our system is demonstrated on two applications, one for browsing cityscapes, the other for viewing insect specimens.
Citations: 0
AUDIOZOOM: Location Based Sound Delivery system
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364596
Chinmay Rajguru, Daniel Blaszczak, A. Pouryazdan, T. J. Graham, G. Memoli
Citations: 4
Midair Haptic Representation for Internal Structure in Volumetric Data Visualization
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364584
T. Takashina, Mitsuru Ito, Yuji Kokumai
Abstract: In this paper, we propose a method to perceive the internal structure of volumetric data using midair haptics. In this method, we render haptic stimuli using a Gaussian mixture model to approximate the internal structure of the volumetric data. The user’s hand is tracked by a sensor and is represented in a virtual space. Users can touch the volumetric data with virtual hands. The focal points of the ultrasound phased arrays for presenting the sense of touch are determined from the position of the user’s hand and the contact point of the virtual hand on the volumetric data. These haptic cues allow the user to directly perceive the sensation of touching the inside of the volumetric data. Our proposal is a solution for the occlusion problem in volumetric data visualization.
Citations: 2
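As a rough illustration of the approach described in this abstract, the Python sketch below fits a Gaussian mixture to the occupied voxels of a volume and picks a haptic focal point near the virtual hand's contact position. It is a minimal sketch under assumed parameters (density threshold, number of components, a single focal point), not the authors' implementation.

```python
# Illustrative sketch: approximate a volume's internal structure with a GMM and
# choose a haptic focal point near the virtual-hand contact position.
# Thresholds, component count, and the nearest-centre rule are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_structure_gmm(volume, density_threshold=0.5, n_components=4, rng=0):
    """Fit a GMM over the coordinates of occupied voxels."""
    coords = np.argwhere(volume > density_threshold).astype(float)  # (N, 3)
    return GaussianMixture(n_components=n_components, random_state=rng).fit(coords)

def haptic_focal_point(gmm, contact_point):
    """Pick the Gaussian centre closest to the virtual-hand contact point
    as the target for the ultrasound phased array."""
    centres = gmm.means_                                  # (K, 3)
    d = np.linalg.norm(centres - contact_point, axis=1)
    return centres[np.argmin(d)]

# Example: a synthetic volume with one dense blob, touched near its centre.
vol = np.zeros((32, 32, 32))
vol[10:20, 10:20, 10:20] = 1.0
gmm = fit_structure_gmm(vol)
print(haptic_focal_point(gmm, contact_point=np.array([15.0, 15.0, 15.0])))
```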
A Wavelet Energy Decomposition Signature for Robust Non-Rigid Shape Matching
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364556
Yiqun Wang, Jianwei Guo, Dongming Yan, Xiaopeng Zhang
Abstract: We present a novel local shape descriptor, named wavelet energy decomposition signature (WEDS), for robustly matching non-rigid 3D shapes with different resolutions. The local shape descriptors are generated by decomposing Dirichlet energy on the input triangular mesh. Our approach can be either applied directly or used as the input to other learning-based approaches. Experimental results show that the proposed WEDS achieves promising results on shape matching tasks in terms of incompatible shape structures.
Citations: 1
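For background on the quantity this descriptor builds on, the sketch below evaluates the Dirichlet energy of a per-vertex function on a triangle mesh using a uniform graph Laplacian. Uniform weights are an assumption made for brevity; a mesh pipeline would more likely use cotangent weights, and the wavelet-based energy decomposition itself is not reproduced here.

```python
# Generic sketch of Dirichlet energy on a triangle mesh with a uniform Laplacian.
import numpy as np

def uniform_laplacian(n_vertices, faces):
    """L = D - A built from mesh connectivity (uniform edge weights)."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    return np.diag(A.sum(axis=1)) - A

def dirichlet_energy(L, f):
    """E(f) = f^T L f, measuring how much f varies over the surface."""
    return float(f @ L @ f)

# Example: a single triangle with a linear function over its vertices.
faces = [(0, 1, 2)]
L = uniform_laplacian(3, faces)
print(dirichlet_energy(L, np.array([0.0, 1.0, 2.0])))   # sum of squared edge differences
```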
User-friendly Interior Design Recommendation
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364562
Akari Nishikawa, K. Ono, M. Miki
Abstract: We propose a novel search engine that recommends a combination of furniture preferred by a user based on image features. In recent years, research on furniture search engines has attracted attention with the development of deep learning techniques. However, existing search engines mainly focus on techniques for retrieving similar furniture items, and few studies have considered interior combinations. Even techniques that do consider combinations do not take the preference of each user into account: they make recommendations based on the text data attached to the image and do not incorporate a mechanism for judging differences in individual preference, such as the shape and color of furniture. Thus, in this study, we propose a method that recommends items matching the selected item for each individual, based on individual preference, by analyzing images selected by the user and automatically creating a rule for combining furniture from the proposed features.
Citations: 1
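As a hedged illustration of image-feature-based recommendation (not the authors' rule-learning method), the sketch below ranks candidate furniture items by cosine similarity between their image features and the mean feature vector of the items a user has selected. The 128-dimensional features and random data are placeholders for embeddings produced upstream, for example by a CNN.

```python
# Content-based ranking sketch: score candidates against a user-preference profile
# built from the image features of items the user selected.
import numpy as np

def recommend(candidate_features, selected_features, top_k=3):
    profile = selected_features.mean(axis=0)
    profile /= np.linalg.norm(profile)
    feats = candidate_features / np.linalg.norm(candidate_features, axis=1, keepdims=True)
    scores = feats @ profile                      # cosine similarity to the profile
    return np.argsort(-scores)[:top_k]

rng = np.random.default_rng(0)
candidates = rng.normal(size=(50, 128))   # 50 items, 128-D image features (assumed)
selected = rng.normal(size=(5, 128))      # items the user picked
print(recommend(candidates, selected))
```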
Fundus imaging using DCRA toward large eyebox
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364579
Yuichi Atarashi, Kazuki Otao, Takahito Aoto, Yoichi Ochiai
Abstract: We propose a novel fundus imaging system using a dihedral corner reflector array (DCRA), an optical component that works like a lens but has neither a focal length nor an optical axis. A DCRA transfers a light source to its plane-symmetric point. Conventionally, this property has been used in many display applications in the field of computer graphics, such as virtual retinal displays and three-dimensional displays. As a sensing application, in contrast, we use a DCRA to set a virtual camera in/on an eyeball to capture the fundus. The proposed system has three features: (1) it is robust to eye movement, (2) it is wavelength-independent, and (3) it uses a simple optical system. In our experiments, the proposed system achieves a large eyebox of 8 mm. The proposed system could be applied to household preventive medicine for use in daily life.
Citations: 0
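The geometric property the system relies on, that a DCRA re-images a point source at its plane-symmetric position, can be sketched in a few lines. The plane origin, normal, and source position below are arbitrary example values, not parameters from the paper.

```python
# Sketch of the DCRA imaging property: a point is re-imaged at its mirror
# position with respect to the DCRA plane.
import numpy as np

def plane_symmetric_point(p, plane_origin, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_origin, n) * n

src = np.array([0.0, 0.0, -30.0])            # light source 30 mm in front of the DCRA
dcra_origin = np.array([0.0, 0.0, 0.0])
dcra_normal = np.array([0.0, 0.0, 1.0])
print(plane_symmetric_point(src, dcra_origin, dcra_normal))  # image forms at z = +30 mm
```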
Eye-Tracking Based Adaptive Parallel Coordinates
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364563
Mohammad Chegini, K. Andrews, T. Schreck, A. Sourin
Abstract: Parallel coordinates is a well-known technique for visual analysis of high-dimensional data. Although it is effective for interactive discovery of patterns in subsets of dimensions and data records, it also has scalability issues for large datasets. In particular, the amount of visual information potentially being shown in a parallel coordinates plot grows combinatorially with the number of dimensions. Choosing the right ordering of axes is crucial, and poor design can lead to visual noise and a cluttered plot. In this case, the user may overlook a significant pattern, or leave some dimensions unexplored. In this work, we demonstrate how eye-tracking can help an analyst efficiently and effectively reorder the axes in a parallel coordinates plot. Implicit input from an inexpensive eye-tracker assists the system in finding unexplored dimensions. Using this information, the system guides the user either visually or automatically to find further appropriate orderings of the axes.
Citations: 2
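The sketch below illustrates one plausible building block for such guidance (not necessarily the authors' exact strategy): accumulate gaze dwell time per parallel-coordinates axis and report the least-viewed axes as candidates for reordering. The axis positions, gaze tolerance, and sample data are assumptions.

```python
# Sketch: estimate per-axis gaze dwell and surface the least-explored dimensions.
from collections import Counter

def dwell_per_axis(gaze_samples, axis_positions, tolerance=0.05):
    """gaze_samples: normalized x positions of gaze points;
    axis_positions: normalized x position of each axis."""
    dwell = Counter()
    for gx in gaze_samples:
        for axis, ax in enumerate(axis_positions):
            if abs(gx - ax) <= tolerance:
                dwell[axis] += 1
    return dwell

def least_explored(dwell, n_axes, top_k=2):
    """Axes with the lowest accumulated dwell time."""
    return sorted(range(n_axes), key=lambda a: dwell.get(a, 0))[:top_k]

axes = [0.0, 0.25, 0.5, 0.75, 1.0]
gaze = [0.01, 0.02, 0.26, 0.24, 0.24, 0.49, 0.51, 0.5]
d = dwell_per_axis(gaze, axes)
print(least_explored(d, n_axes=len(axes)))   # axes the user has barely looked at
```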
Computational Spectral-Depth Imaging with a Compact System
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364570
Mingde Yao, Zhiwei Xiong, Lizhi Wang, Dong Liu, Xuejin Chen
Abstract: In this paper, a compact imaging system is developed to enable simultaneous acquisition of spectral and depth information in real time with high resolution. We achieve this goal using only two commercial cameras and relying on an efficient computational reconstruction algorithm based on deep learning. For the first time, this work allows 5D information (3D space + 1D spectrum + 1D time) of the target scene to be captured with a miniaturized apparatus and without active illumination.
Citations: 2
Color-Based Edge Detection on Mesh Surface
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364580
Yi-Jheng Huang
Abstract: Edge detection is a fundamental technique with many applications. We propose an algorithm for detecting edges based on the color of a mesh surface. To the best of our knowledge, we are the first to detect edges on a mesh surface based on its color. The basic idea of our method is to compute color gradient magnitudes over the mesh surface. To do so, the mesh is split into segments along the intersections of surfaces. The segments are then voxelized, and each voxel is assigned a representative color by averaging the colors at the boundaries between voxels and mesh faces. Artificial neighbors are created for completeness, and 3D Canny edge detection is applied to the resulting 3D representation. Lastly, additional intersections are added by looking at the intersections of pairs of surfaces. Figure 1 shows the results of our method.
Citations: 2
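As a simplified stand-in for the edge-detection step described above, the sketch below computes a 3D gradient magnitude on a voxel grid of representative color intensities and thresholds it. A full 3D Canny pass would add non-maximum suppression and hysteresis, and the voxelization and color-averaging steps are assumed to have happened already.

```python
# Sketch: threshold the 3D gradient magnitude of a voxelized (grayscale) color grid.
import numpy as np
from scipy import ndimage

def voxel_color_edges(color_grid, sigma=1.0, threshold=0.1):
    """color_grid: 3D array of per-voxel representative color intensities."""
    grad_mag = ndimage.gaussian_gradient_magnitude(color_grid, sigma=sigma)
    return grad_mag > threshold

# Example: a grid whose color jumps across a plane yields edge voxels near that plane.
grid = np.zeros((16, 16, 16))
grid[:, :, 8:] = 1.0
edges = voxel_color_edges(grid)
print(int(edges.sum()), "edge voxels")
```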
A Method to Create Fluttering Hair Animations That Can Reproduce Animator’s Techniques
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364582
Naoaki Kataoka, Tomokazu Ishikawa, I. Matsuda
Abstract: We propose a method based on an animator’s technique to create animations of objects fluttering in the wind, such as hair and flags. As a preliminary study, we analyzed how fluttering objects are expressed in hand-drawn animations and confirmed that there is a traditional technique commonly used by professional animators. In the case of hair, for example, the tip of the hair is often moved in the shape of a figure eight, and the remaining hair bundle is animated as if a wave caused by this movement were propagating along the hair. Based on this observation, we developed a system to reproduce this technique digitally. In this system, the user sketches the trajectories of a few control points on a hair bone, and their motion is propagated to the whole hair bundle to represent the waving behavior. In this process, the user can interactively adjust two parameters: swing speed and wave propagation delay. As a system evaluation, we conducted a user test in which several subjects reproduced a sample animation using our system.
Citations: 0
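A minimal sketch of the animation idea, under assumed parameters: the hair tip traces a figure-eight (Lissajous) path, and each control point along the bone replays that motion with a propagation delay and reduced amplitude. This is an illustration of the described technique, not the authors' system.

```python
# Sketch: figure-eight tip motion propagated along hair control points with delay.
import numpy as np

def figure_eight(t, width=1.0, height=0.5, speed=1.0):
    """Figure-eight offset at time t (x sweeps once, y sweeps twice per cycle)."""
    return np.array([width * np.sin(speed * t),
                     height * np.sin(2.0 * speed * t)])

def control_point_offsets(t, n_points=6, delay=0.3, falloff=0.8):
    """Offsets for control points from root (index 0) to tip (last index):
    points farther from the tip lag behind and move with smaller amplitude."""
    return [figure_eight(t - delay * (n_points - 1 - i)) * (falloff ** (n_points - 1 - i))
            for i in range(n_points)]

for t in np.linspace(0.0, 2.0 * np.pi, 4):
    print([np.round(p, 2) for p in control_point_offsets(t)])
```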