Proceedings. Graphics Interface (Conference): Latest Publications

Reading Small Scalar Data Fields: Color Scales vs. Detail on Demand vs. FatFonts
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.07
Constant Manteau, Miguel A. Nacenta, Michael Mauderer
Abstract: We empirically investigate the advantages and disadvantages of color and digit-based methods to represent small scalar fields. We compare two types of color scales (one brightness-based and one that varies in hue, saturation, and brightness) with an interactive tooltip that shows the scalar value on demand, and with a symbolic glyph-based approach (FatFonts). Three experiments tested three tasks: reading values, comparing values, and finding extrema. The results provide the first empirical comparisons of color scales with symbol-based techniques. The interactive tooltip enabled higher accuracy and shorter times than the color scales for reading values but showed slow completion times and low accuracy for the value comparison and extrema finding tasks. The FatFonts technique showed better speed and accuracy for reading and value comparison, and high accuracy for the extrema finding task, at the cost of being the slowest for this task.
Pages: 50–56
Citations: 2
No Need to Stop What You're Doing: Exploring No-Handed Smartwatch Interaction
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.14
Seongkook Heo, M. Annett, B. Lafreniere, Tovi Grossman, G. Fitzmaurice
Abstract: Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly in situations where the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques and another where participants used the techniques in simulated hand-constrained scenarios. Our results show a preference for foot-based interaction and reveal novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.
Pages: 107–114
Citations: 22
FLOWPAK: Flow-based Ornamental Element Packing
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.02
Reza Adhitya Saputra, C. Kaplan, P. Asente, R. Mech
Abstract: We present a technique for drawing ornamental designs consisting of placed instances of simple shapes. These shapes, which we call elements, are selected from a small library of templates. The elements are deformed to flow along a direction field interpolated from user-supplied strokes, giving a sense of visual flow to the final composition, and constrained to lie within a container region. Our implementation computes a vector field based on user strokes, constructs streamlines that conform to the vector field, and places an element over each streamline. An iterative refinement process then shifts and stretches the elements to improve the composition.
Pages: 8–15
Citations: 15
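The streamline-construction stage in the abstract above can be illustrated with a short sketch. This is not the FLOWPAK implementation; the direction field, step size, and function names are illustrative assumptions, showing only how a streamline that conforms to a stroke-derived vector field might be traced:

```python
import numpy as np

def trace_streamline(field, seed, step=0.5, n_steps=200):
    """Trace a streamline through a 2D direction field with RK2 (midpoint) steps.

    field(p) -> unit direction at point p; seed is the starting (x, y).
    """
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = field(p)
        k2 = field(p + 0.5 * step * k1)   # midpoint evaluation for stability
        pts.append(p + step * k2)
    return np.array(pts)

def stroke_field(p):
    # Toy stand-in for a field interpolated from user strokes:
    # mostly rightward flow with a gentle vertical undulation.
    d = np.array([1.0, 0.2 * np.sin(0.05 * p[0])])
    return d / np.linalg.norm(d)

streamline = trace_streamline(stroke_field, seed=(0.0, 0.0))
```

In a FLOWPAK-like pipeline, one ornamental element would then be deformed to lie along each such polyline.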
Merging Sketches for Creative Design Exploration: An Evaluation of Physical and Cognitive Operations
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.15
Senthil K. Chandrasegaran, Sriram Karthik Badam, Ninger Zhou, Zhenpeng Zhao, Lorraine G. Kisselburgh, K. Peppler, N. Elmqvist, K. Ramani
Abstract: Despite its grounding in creativity techniques, merging multiple source sketches to create new ideas has received scant attention in design literature. In this paper, we identify the physical operations involved in merging sketch components. We also introduce the cognitive operations of reuse, repurpose, refactor, and reinterpret, and explore their relevance to creative design. To examine the relationship of cognitive operations, physical techniques, and creative sketch outcomes, we conducted a qualitative user study in which student designers merged existing sketches to generate either an alternative design or an unrelated new design. We compared two digital selection techniques: freeform selection and a stroke-cluster-based "object select" technique. The resulting merged sketches were subjected to crowdsourced evaluation and to manual coding for the use of cognitive operations. Our findings establish a firm connection between the proposed cognitive operations and the context and outcome of creative tasks. Key findings indicate that reinterpret operations correlate strongly with creativity in merged sketches, while reuse operations correlate negatively with creativity. Furthermore, freeform selection was significantly preferred by designers. We discuss the empirical contributions of understanding the use of cognitive operations during design exploration, and the practical implications for designing interfaces in digital tools that facilitate creativity in merging sketches.
Pages: 115–123
Citations: 0
Exploring Multi-touch Contact Size for Z-Axis Movement in 3D Environments
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.09
S. Holderness, Jared N. Bott, P. Wisniewski, J. Laviola
Abstract: In this paper we examine two methods for using relative contact size as an interaction technique for 3D environments on multi-touch capacitive touch screens. We refer to interpreting relative contact size changes as "pressure" simulation. We conducted a 2 x 2 within-subjects experiment using two methods for pressure estimation (calibrated and comparative) and two different 3D tasks (bidirectional and unidirectional). Calibrated pressure estimation was based upon a calibration session, whereas comparative pressure estimation was based upon the contact size of each initial touch. The bidirectional task was guiding a ball through a hoop, while the unidirectional task involved using pressure to rotate a stove knob. Results indicate that the preferred and best performing pressure estimation technique was dependent on the 3D task. For the bidirectional task, calibrated pressure performed significantly better, while the comparative method performed better for the unidirectional task. We discuss the implications and future research directions based on our findings.
Pages: 65–73
Citations: 1
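The two estimation schemes described in the abstract can be sketched as simple mappings from contact size to a normalized pressure value. This is an illustration of the general idea only, not the study's code; the bounds, the gain, and the function names are assumptions:

```python
def calibrated_pressure(size, size_min, size_max):
    """Map contact size to [0, 1] using bounds from a per-user calibration session."""
    return min(1.0, max(0.0, (size - size_min) / (size_max - size_min)))

def comparative_pressure(size, initial_size, gain=2.0):
    """Map contact size to [0, 1] relative to the initial contact size of this touch."""
    return min(1.0, max(0.0, gain * (size - initial_size) / initial_size))
```

For example, with calibration bounds of 10 and 20 units, a contact size of 15 yields `calibrated_pressure(15, 10, 20) == 0.5`, while the comparative scheme needs no calibration session but depends on how firmly the touch started.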
Trade-offs Between a Vertical Shared Display and Two Desktops in a Collaborative Path-Finding Task
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.27
Arnaud Prouzeau, A. Bezerianos, O. Chapuis
Abstract: Large vertical displays are considered well adapted for collaboration, due to their display surface and the space in front of them that can accommodate multiple people. However, few studies empirically support this assertion, and they do not quantitatively assess how collaboration in front of a shared display differs from a non-shared setup, such as multiple desktops with a common view. In this paper, we compare a large shared vertical display with two desktops as pairs of users learn to perform a path-planning task. Our results did not indicate a significant difference in learning between the two setups, but found that participants adopted different task strategies. Moreover, while pairs were overall faster with the two desktops, quality was more consistent with the vertical shared display, where pairs spent more time communicating, even though there is a priori more implicit collaboration in this setup.
Pages: 214–219
Citations: 6
Pattern Formation Through Minimalist Biologically Inspired Cellular Simulation
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.19
M. Malheiros, M. Walter
Abstract: This paper describes a novel model for coupling continuous chemical diffusion and discrete cellular events inside a biologically inspired simulation environment. Our goal is to define and explore a minimalist set of features that are also expressive, enabling the creation of complex and plausible 2D patterns using just a few rules. By not being constrained to a static or regular grid, we show that many different phenomena can be simulated, such as traditional reaction-diffusion systems, cellular automata, and pigmentation patterns from living beings. In particular, we demonstrate that adding chemical saturation significantly increases the range of patterns reaction-diffusion can produce, including patterns not previously possible, such as leopard rosettes. Our results suggest a possible universal model that can integrate previous pattern formation approaches, providing new ground for experimentation, and realistic-looking textures for general use in computer graphics.
Pages: 148–155
Citations: 8
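The idea of adding saturation to reaction-diffusion can be illustrated with a standard Gray-Scott system plus a concentration cap. This is a generic grid-based stand-in, not the paper's (grid-free) model or its parameters; the cap value `v_max` is an assumed parameter:

```python
import numpy as np

def laplacian(a):
    # 5-point stencil with wrap-around (toroidal) boundaries.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott(n=64, steps=500, Du=0.16, Dv=0.08, f=0.035, k=0.065, v_max=1.0):
    """Gray-Scott reaction-diffusion with a saturation cap on chemical v."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    c = n // 2
    u[c-4:c+4, c-4:c+4] = 0.5    # seed a small square of activator
    v[c-4:c+4, c-4:c+4] = 0.5
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + f * (1.0 - u)
        v += Dv * laplacian(v) + uvv - (f + k) * v
        v = np.clip(v, 0.0, v_max)   # saturation: cap the concentration
    return u, v

u, v = gray_scott()
```

Varying the cap (and the usual feed/kill parameters) changes which patterns can form; the paper's claim is that such saturation opens up pattern families plain reaction-diffusion cannot reach.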
Generating Calligraphic Trajectories with Model Predictive Control
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.17
Daniel Berio, S. Calinon, F. Leymarie
Abstract: We describe a methodology for the interactive definition of curves and motion paths using a stochastic formulation of optimal control. We demonstrate how the same optimization framework can be used in different ways to generate curves and traces that are geometrically and dynamically similar to the ones seen in art forms such as calligraphy or graffiti art. The method provides a probabilistic description of trajectories that can be edited similarly to the control polygon typically used in popular spline-based methods. Furthermore, it also encapsulates movement kinematics, deformations, and variability. The user is then provided with a simple interactive interface that can generate multiple movements and traces at once, by visually defining a distribution of trajectories rather than a single one. The input to our method is a sparse sequence of targets defined as multivariate Gaussians. The output is a dynamical system generating curves that are natural looking and reflect the kinematics of a movement, similar to that produced by human drawing or writing.
Pages: 132–139
Citations: 12
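The input-output relationship described above (Gaussian targets in, a distribution of smooth traces out) can be mimicked with a toy sketch. This is not the authors' stochastic optimal control formulation: it merely samples via-points from the Gaussian targets and connects them with minimum-jerk segments, so that re-sampling yields the kind of trajectory variability the abstract describes. All names and parameters are illustrative:

```python
import numpy as np

def min_jerk(p0, p1, n=50):
    """Minimum-jerk interpolation between two 2D points (quintic blend,
    zero velocity and acceleration at both ends)."""
    t = np.linspace(0.0, 1.0, n)
    s = 10 * t**3 - 15 * t**4 + 6 * t**5
    return p0[None, :] + s[:, None] * (p1 - p0)[None, :]

def sample_trajectory(means, covs, rng):
    """Sample one calligraphic trace through a sequence of Gaussian targets."""
    pts = [rng.multivariate_normal(m, c) for m, c in zip(means, covs)]
    segs = [min_jerk(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    return np.vstack(segs)

means = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 0.0])]
covs = [0.01 * np.eye(2)] * 3     # target covariances control stroke variability
rng = np.random.default_rng(1)
traj = sample_trajectory(means, covs, rng)
```

Drawing many samples with the same targets produces a family of similar but non-identical strokes; in the paper, kinematics and correlations across targets are handled by the optimal control solver rather than by independent sampling.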
Depth Map Design and Depth-based Effects With a Single Image
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.08
J. Liao, Shuheng Shen, E. Eisemann
Abstract: We present a novel pipeline to generate a depth map from a single image that can be used as input for a variety of artistic depth-based effects. In such a context, the depth maps do not have to be perfect but are rather designed with respect to a desired result. Consequently, our solution centers around user interaction and relies on scribble-based depth editing. The annotations can be sparse, as the depth map is generated by a diffusion process guided by image features. Additionally, we support a variety of controls, such as a non-linear depth mapping, a steering mechanism for the diffusion (e.g., directionality, emphasis, or reduction of the influence of image cues), and relative as well as absolute depth indications. We demonstrate a variety of artistic 3D results, including wiggle stereoscopy and depth of field.
Pages: 57–64
Citations: 7
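The core mechanism, diffusing sparse scribble values across the image while respecting image edges, can be sketched with a simple edge-aware weighted diffusion. This is a generic heuristic (luminance-difference weights, Jacobi-style iteration), not the paper's exact pipeline, and all names and parameters are illustrative:

```python
import numpy as np

def diffuse_depth(image, scribbles, mask, iters=2000, beta=10.0):
    """Propagate sparse depth scribbles over an image by weighted diffusion.

    image: 2D luminance array; scribbles: depth annotations;
    mask: True where a scribble pins the depth. Neighbour weights fall off
    with luminance difference, so diffusion slows at image edges.
    """
    d = np.where(mask, scribbles, scribbles[mask].mean())
    def w(ax, shift):   # edge-aware weight toward one 4-neighbour
        return np.exp(-beta * np.abs(image - np.roll(image, shift, ax)))
    wu, wd, wl, wr = w(0, 1), w(0, -1), w(1, 1), w(1, -1)
    for _ in range(iters):
        num = (wu * np.roll(d, 1, 0) + wd * np.roll(d, -1, 0) +
               wl * np.roll(d, 1, 1) + wr * np.roll(d, -1, 1))
        d = num / (wu + wd + wl + wr)
        d[mask] = scribbles[mask]     # re-impose the scribble constraints
    return d

# Two flat regions split by a vertical edge, one scribble in each.
image = np.zeros((16, 16)); image[:, 8:] = 1.0
scribbles = np.zeros((16, 16)); scribbles[8, 13] = 1.0
mask = np.zeros((16, 16), dtype=bool)
mask[8, 2] = True     # "near" scribble (depth 0), left region
mask[8, 13] = True    # "far" scribble (depth 1), right region
depth = diffuse_depth(image, scribbles, mask)
```

Because the weights collapse across the luminance edge, each region settles near its own scribble's depth; the paper's steering controls (directionality, cue emphasis) would correspond to reshaping these weights.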
Content and Surface Aware Projection
Proceedings. Graphics Interface (Conference) Pub Date : 2017-06-01 DOI: 10.20380/GI2017.04
Long Mai, Hoang Le, Feng Liu
Abstract: Image projection is important for many applications in the entertainment industry, augmented reality, and computer graphics. However, projection often introduces perceived distortion, a common problem with projector systems. Compensating for such distortion when projecting on non-trivial surfaces is often very challenging. In this paper, we propose a novel method to pre-warp the image such that it appears as distortion-free as possible on the surface after projection. Our method estimates a desired optimal warping function via an optimization framework. Specifically, we design an objective energy function that models the perceived distortion in projection results. By taking into account both the geometry of the surface and the image content, our method can produce more visually plausible projection results than traditional projector systems. We demonstrate the effectiveness of our method with projection results on a wide variety of images and surface geometries.
Pages: 24–32
Citations: 2