Proceedings. Graphics Interface (Conference): Latest Publications

Multiwave: Complex Hand Gesture Recognition Using the Doppler Effect
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-06-01 · DOI: 10.20380/GI2017.13 · Pages: 97-106
Corey R. Pittman, J. LaViola
Abstract: We built an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler effect to translate multidimensional movements into user interface commands. Our system requires only a speaker and a microphone to be operational, but can be augmented with more speakers. Since these components are already included in most end-user systems, our design makes gesture-based input accessible to a wider range of end users. We are able to detect complex gestures by generating a known high-frequency tone from multiple speakers and detecting movement using changes in the sound waves. We present the results of a user study of Multiwave to evaluate recognition rates for different gestures, and report error rates comparable to or better than the current state of the art despite the added complexity. We also report subjective user feedback and lessons learned from our system that provide additional insight for future applications of multidimensional acoustic gesture recognition.
Citations: 14
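As a rough illustration of the sensing principle this abstract describes, the sketch below (hypothetical, not the authors' code) estimates the Doppler shift of one pilot tone from a frame of microphone samples; the sample rate, pilot frequency, and search band are illustrative assumptions.

import numpy as np

def doppler_shift(frame, sample_rate=44100, pilot_hz=18000.0, band_hz=500.0):
    """Estimate the Doppler shift (Hz) of a pilot tone in one audio frame."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Search only a narrow band around the pilot tone: motion toward the
    # microphone raises the observed peak frequency, motion away lowers it.
    band = (freqs > pilot_hz - band_hz) & (freqs < pilot_hz + band_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz - pilot_hz

With several speakers emitting distinct pilot tones, one such shift per tone yields the multidimensional movement signal that a gesture recognizer can then consume.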
Tell Me More! Soliciting Reader Contributions to Software Tutorials
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-06-01 · DOI: 10.20380/GI2017.03 · Pages: 16-23
P. Dubois, Volodymyr Dziubak, Andrea Bunt
Abstract: Online software tutorials help a wide range of users acquire skills with complex software, but are not always easy to follow. For example, a tutorial might target users with a high skill level, or it might contain errors and omissions. Prior work has shown that user contributions, such as user comments, can add value to a tutorial. Building on this prior work, we investigate an approach to soliciting structured tutorial enhancements from tutorial readers. We illustrate this approach through a prototype called Antorial, and evaluate its impact on reader contributions through a multi-session study with 13 participants. Our findings suggest that scaffolding tutorial contributions has positive impacts on both the number and type of reader contributions. Our findings also point to design considerations for systems that aim to support community-based tutorial refinement, and suggest promising directions for future research.
Citations: 5
Ballistic Shadow Art
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-06-01 · DOI: 10.20380/GI2017.24 · Pages: 190-198
Xiaozhong Chen, S. Andrews, D. Nowrouzezahrai, P. Kry
Abstract: We present a framework for generating animated shadow art using occluders under ballistic motion. We apply stochastic optimization to find the parameters of a multi-body physics simulation that produce a desired shadow at a specific instant in time. We perform simulations across many different initial conditions, applying a set of carefully crafted energy functions to evaluate the motion trajectory and multi-body shadows. We select the optimal parameters, resulting in a ballistics simulation that produces ephemeral shadow art. Users can design physically plausible dynamic artwork that would be extremely challenging, if not impossible, to achieve manually. We present and analyze a number of compelling examples.
Citations: 2
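The optimization this abstract outlines can be caricatured as a stochastic search over initial conditions. The toy sketch below is a deliberately simplified stand-in (the 1D shadow model, the energy, and all names are illustrative assumptions, not the paper's formulation): it samples ballistic parameters and keeps the set whose projected shadow at time t* best matches a target.

import random

def shadow_at(params, t):
    # Ballistic x-motion only: x(t) = x0 + vx * t; the "shadow" is each
    # body's x-coordinate projected straight down onto the ground plane.
    return sorted(x0 + vx * t for (x0, vx) in params)

def energy(shadow, target):
    # Sum of squared distances between matched shadow and target points.
    return sum((s - t) ** 2 for s, t in zip(shadow, target))

def optimize(target, t_star=1.0, n_trials=20000):
    best, best_e = None, float("inf")
    for _ in range(n_trials):
        # Sample an initial position and velocity for each rigid body.
        params = [(random.uniform(-5, 5), random.uniform(-5, 5))
                  for _ in range(len(target))]
        e = energy(shadow_at(params, t_star), target)
        if e < best_e:
            best, best_e = params, e
    return best, best_e

print(optimize(target=[-2.0, -1.0, 1.0, 2.0]))

The paper replaces each toy piece with the real thing: a full multi-body physics simulation, 2D shadow images, and energy terms that also score the motion trajectory itself.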
Revectorization-Based Accurate Soft Shadow using Adaptive Area Light Source Sampling
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-01-06 · DOI: 10.20380/GI2017.23 · Pages: 181-189
Márcio C. F. Macedo, A. Apolinario
Abstract: Physically based accurate soft shadows are typically computed by evaluating a visibility function over several point light sources that approximate an area light source. This visibility evaluation is computationally expensive for hundreds of light source samples, yielding performance far from real time. One way to reduce this cost is to adaptively reduce the number of samples required to generate accurate soft shadows. Unfortunately, adaptive area light source sampling is prone to temporal incoherence and banding artifacts, and is slower than uniform sampling in some scene configurations. In this paper, we aim to solve these problems by proposing a revectorization-based accurate soft shadow algorithm. We take advantage of the improved accuracy obtained with shadow revectorization to generate accurate soft shadows from a few light source samples, while producing temporally coherent soft shadows at interactive frame rates. We also propose an algorithm that restricts the costly accurate soft shadow evaluation to penumbra fragments only. Our results show that our approach is, in general, faster than uniform sampling and more accurate than existing real-time soft shadow algorithms.
Citations: 2
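The cost-saving idea of restricting expensive evaluation to penumbra fragments can be sketched as follows. This is a minimal, hypothetical illustration of adaptive area-light sampling in general, not the paper's algorithm; occluded() stands in for a shadow query against the scene.

def soft_shadow(point, light_samples, occluded, coarse=4):
    """Fraction of the area light visible from `point` (0 = umbra, 1 = lit)."""
    # Cheap pass: a few light samples classify the fragment.
    vis = [0.0 if occluded(point, s) else 1.0 for s in light_samples[:coarse]]
    v = sum(vis) / coarse
    if v in (0.0, 1.0):
        return v  # fully lit or fully shadowed: no further samples needed
    # Penumbra fragment: spend the full sampling budget only here.
    vis += [0.0 if occluded(point, s) else 1.0 for s in light_samples[coarse:]]
    return sum(vis) / len(vis)

Naive adaptive schemes of exactly this shape are where the banding and temporal-incoherence problems mentioned in the abstract arise; the paper's revectorization is aimed at curing them while keeping the sample count low.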
Cut and Paint: Occlusion-Aware Subset Selection for Surface Processing
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-01-01 · DOI: 10.20380/GI2017.11 · Pages: 82-89
M. Radwan, S. Ohrhallinger, E. Eisemann, M. Wimmer
Abstract: Surface selection operations by a user are fundamental for many applications and a standard tool in mesh-editing software. Unfortunately, defining a selection is only straightforward if the region is visible and on a convex model. Concave surfaces can exhibit self-occlusions, which require using multiple camera positions to obtain unobstructed views; the process thus becomes iterative and cumbersome. Our novel approach enables selections that lie under occlusions, and even on the back side of objects, for arbitrary depth complexity at interactive rates. We rely on a user-drawn curve in screen space, which is projected onto the mesh and analyzed with respect to visibility to guarantee a continuous path on the surface. Our occlusion-aware surface-processing method enables a number of applications in an easy way. As examples, we show continuous painting on the surface, selecting regions for texturing, and creating illustrative cutaways from nested models and animating them.
Citations: 0
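For contrast with the paper's occlusion-aware method, here is the naive projection step it improves on, as a hedged sketch; camera.ray_through_pixel and mesh.first_hit are hypothetical stand-ins for a camera unprojection and a ray/mesh intersection query.

def project_stroke(stroke_2d, camera, mesh):
    """Project a user-drawn screen-space stroke onto a mesh surface."""
    path = []
    for pixel in stroke_2d:
        ray = camera.ray_through_pixel(pixel)  # world-space ray for the pixel
        hit = mesh.first_hit(ray)              # nearest intersection, or None
        if hit is not None:
            path.append(hit)
    return path  # breaks wherever a nearer, unrelated surface occludes

The visibility analysis described in the abstract is what turns this fragmented result into a guaranteed-continuous path on the surface, even across occluded regions and the object's back side.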
Parameter Aligned Trimmed Surfaces
Proceedings. Graphics Interface (Conference) · Pub Date: 2017-01-01 · DOI: 10.20380/GI2017.12 · Pages: 90-96
S. Halbert, F. Samavati, Adam Runions
Abstract: We present a new representation for trimmed parametric surfaces. Given a set of trimming curves in the parametric domain of a surface, our method locally reparametrizes the parameter space to permit accurate representation of these features without partitioning the surface into subsurfaces. Instead, the parameter space is segmented into subspaces containing the trimming curves, the boundaries of which are aligned to the local parameter axes. When multiple trimming curves are present, intersecting subspaces are further segmented using local Voronoi curve diagrams, which allows the subspace to be distributed equally between the trimming curves. Transition patches are then used to reparametrize the areas around the trimming curves to accommodate the trimmed edges. This allows for high-quality interpolation of the trimmed edges while still allowing parametric referencing and trimmed-surface sampling.
Citations: 0
A Conversation with the CHCCS/SCDHM 2016 Achievement Award Winner
Proceedings. Graphics Interface (Conference) · Pub Date: 2016-06-01 · DOI: 10.20380/GI2016.01 · Pages: 1-3
M. van de Panne, P. Kry
Abstract: This paper constitutes the invited publication that CHCCS extends to the Achievement Award winner. This year, we experiment with a new interview format, which permits a casual discussion of the research area, insights, and contributions of the award winner. What follows is an edited version of a conversation that took place on April 7, 2016, via Google Hangouts.
Citations: 0
RealFusion: An Interactive Workflow for Repurposing Real-World Objects towards Early-stage Creative Ideation
Proceedings. Graphics Interface (Conference) · Pub Date: 2016-06-01 · DOI: 10.20380/GI2016.11 · Pages: 85-92
Cecil Piya, Vinayak, Yunbo Zhang, K. Ramani
Abstract: We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of 2D sketching. RealFusion comprises three creative states, where users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface comprising a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.
Citations: 15
Capturing Spatially Varying Anisotropic Reflectance Parameters using Fourier Analysis
Proceedings. Graphics Interface (Conference) · Pub Date: 2016-06-01 · DOI: 10.20380/GI2016.09 · Pages: 65-73
Alban Fichet, Imari Sato, Nicolas Holzschuch
Abstract: Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters remains a difficult problem, even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for acquiring spatially varying anisotropic materials from only a small number of sample directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal, and reflectance parameters from as few as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.
Citations: 6
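To give a flavor of the Fourier idea, the sketch below fits a simple cosine-lobe reflectance model (an assumption made here for illustration; the paper's model and parameter set are richer) to samples taken at evenly spaced rotation angles, reading the anisotropy direction off the phase of a Fourier coefficient.

import numpy as np

def analyze(samples):
    """samples: reflectance measured at N evenly spaced angles over [0, 2*pi)."""
    c = np.fft.rfft(samples) / len(samples)
    mean_level = c[0].real          # DC term: diffuse plus mean lobe energy
    lobe_amp = 2 * np.abs(c[2])     # 2nd harmonic: anisotropic lobe strength
    lobe_dir = -np.angle(c[2]) / 2  # lobe orientation, defined modulo pi
    return mean_level, lobe_amp, lobe_dir

# Synthetic check: a lobe oriented at 30 degrees, 20 sample directions.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
r = 0.3 + 0.5 * np.cos(theta - np.pi / 6) ** 2
print(analyze(r))  # lobe_dir comes out near pi/6

Because cos^2 folds into a second harmonic, 20 samples are far more than this toy needs; the point of the paper is that comparably small direction counts suffice even for its full per-pixel parameter set.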
Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation
Proceedings. Graphics Interface (Conference) · Pub Date: 2016-06-01 · DOI: 10.20380/GI2016.18 · Pages: 143-151
Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker
Abstract: Marker-based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker-based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using a larger number of markers; tracking many markers can also quickly become intractable due to issues such as occlusion, swapping, and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification-based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a head-mounted camera (HMC), which provides 3D facial-marker-based tracking and corresponding video, we fit accurate, production-quality blendshape models to this data, resulting in high-quality animations.
Citations: 5
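The underlying fitting step, stripped of the paper's FACS-classifier guidance and perceptual terms, is a constrained least-squares solve. A minimal sketch, assuming stacked marker coordinates and per-blendshape marker displacements (the shapes and clamping convention are illustrative assumptions):

import numpy as np
from scipy.optimize import nnls

def fit_blendshape_weights(markers, neutral, deltas):
    """
    markers: (3m,) tracked 3D marker positions for one frame, stacked
    neutral: (3m,) the same marker positions on the neutral face
    deltas:  (3m, k) displacement each of the k blendshapes applies per marker
    Returns k non-negative blendshape weights, clamped to [0, 1].
    """
    w, _residual = nnls(deltas, markers - neutral)  # least squares with w >= 0
    return np.minimum(w, 1.0)

The paper's contribution is in steering such a solver with Action Unit detections from the video between the markers, so that fewer markers still yield production-quality weights.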