Proceedings of the 30th Spring Conference on Computer Graphics: Latest Publications

Modeling and representing materials in the wild
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2700379
K. Bala
{"title":"Modeling and representing materials in the wild","authors":"K. Bala","doi":"10.1145/2643188.2700379","DOIUrl":"https://doi.org/10.1145/2643188.2700379","url":null,"abstract":"Our everyday life brings us in contact with a rich range of materials that contribute to both the utility and aesthetics of our environment. Human beings are very good at using subtle distinctions in appearance to distinguish between materials (e.g., silk vs. cotton, laminate vs. granite). Capturing these visually important, yet subtle, distinctions is critical for applications in many domains: in virtual and augmented reality fueled by the advent of devices like Google Glass, in virtual prototyping for industrial design, in ecommerce and retail, in textile design and prototyping, in interior design and remodeling, and in games and movies. Understanding how humans perceive materials can drive better graphics and vision algorithms for material recognition and understanding, and material reproduction. As a first step towards achieving this goal, it is useful to collect information about the vast range of materials that we encounter in our daily lives. We introduce two new crowdsourced databases of material annotations to drive better material-driven exploration. OpenSurfaces is a rich, labeled database consisting of thousands of examples of surfaces segmented from consumer photographs of interiors, and annotated with material parameters, texture information, and contextual information. IIW (Intrinsic Images in theWild) is a database of pairwise material annotations of points in images that is useful for decomposing images in the wild into material and lighting layers. Together these databases can drive various material-based applications like surface retexturing, intrinsic image decomposition, intelligent material-based image browsing, and material design.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115602726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Computational photography: coding in space and time
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2700583
B. Masiá
{"title":"Computational photography: coding in space and time","authors":"B. Masiá","doi":"10.1145/2643188.2700583","DOIUrl":"https://doi.org/10.1145/2643188.2700583","url":null,"abstract":"Computational photography emerged as a multidisciplinary field at the intersection of optics, computer vision, and computer graphics, with the objective of acquiring richer representations of a scene than those that conventional cameras can capture. The basic idea is to somehow code the information before it reaches the sensor, so that a posterior decoding will yield the final image (or video, light field, focal stack, etc). We describe here two examples of computational photography. One deals with coded apertures for the problem of defocus deblurring, and is a classical example of this coding-decoding scheme. The other is an ultrafast imaging system, the first to be able to capture light propagation in macroscopic high resolution scenes at 0.5 trillion frames per second.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"390 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127590942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
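The coded-aperture example follows the coding-decoding scheme described above: the aperture pattern determines the defocus point-spread function, and a deconvolution step recovers the sharp image. Below is a minimal sketch of such a decoding step using frequency-domain Wiener deconvolution; the binary code, image size, and noise-to-signal ratio are illustrative placeholders, not values from the paper.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Embed a small PSF into an image-sized kernel, centred at the origin, and FFT it."""
    kernel = np.zeros(shape)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(kernel)

def wiener_deblur(blurred, psf, nsr=1e-3):
    """Decoding step: invert the (circular) coded blur in the frequency domain."""
    K = psf_to_otf(psf, blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * B))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    # A broadband binary code keeps more spatial frequencies than a plain open
    # aperture, which is what makes the inversion well conditioned.
    code = np.array([[1, 0, 1, 1, 0, 1, 1]], dtype=float)
    psf = code.T @ code
    psf /= psf.sum()
    K = psf_to_otf(psf, sharp.shape)
    blurred = np.real(np.fft.ifft2(K * np.fft.fft2(sharp)))  # simulate the coded blur
    recovered = wiener_deblur(blurred, psf)
    print("mean abs reconstruction error:", np.abs(recovered - sharp).mean())
```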
Bounding volume hierarchies versus kd-trees on contemporary many-core architectures
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643196
Marek Vinkler, V. Havran, Jiří Bittner
{"title":"Bounding volume hierarchies versus kd-trees on contemporary many-core architectures","authors":"Marek Vinkler, V. Havran, Jiří Bittner","doi":"10.1145/2643188.2643196","DOIUrl":"https://doi.org/10.1145/2643188.2643196","url":null,"abstract":"We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison is focused on rendering times and traversal characteristics on the GPU using data structures that were optimized for maximum performance of tracing rays irrespective of the time needed for their build. We show that for a contemporary GPU architecture (NVIDIA Kepler) bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. Kd-trees, on the other hand, have higher performance for complex scenes, in particular for those with occlusion.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122397193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
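For readers less familiar with the data structures being compared, the sketch below shows the core of BVH traversal in plain Python: a stack-based walk with slab ray/box tests that collects candidate primitives. The flat node layout (left child stored implicitly at index + 1) is a common convention assumed here for illustration, not the paper's GPU memory layout.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Node:
    bmin: np.ndarray        # AABB minimum corner
    bmax: np.ndarray        # AABB maximum corner
    right: int = -1         # index of the right child (inner nodes only)
    tri: int = -1           # primitive index (leaves only)

def hit_aabb(orig, inv_dir, bmin, bmax):
    """Slab test: does the ray orig + t*dir hit the box for some t >= 0?"""
    t0 = (bmin - orig) * inv_dir
    t1 = (bmax - orig) * inv_dir
    tnear = np.minimum(t0, t1).max()
    tfar = np.maximum(t0, t1).min()
    return tfar >= max(tnear, 0.0)

def traverse(nodes, orig, direction):
    """Stack-based BVH walk; returns candidate primitive indices in visit order."""
    inv_dir = 1.0 / direction
    stack, hits = [0], []
    while stack:
        i = stack.pop()
        node = nodes[i]
        if not hit_aabb(orig, inv_dir, node.bmin, node.bmax):
            continue
        if node.tri >= 0:            # leaf: record the primitive
            hits.append(node.tri)
        else:                        # inner node: left child sits at i + 1
            stack.append(node.right)
            stack.append(i + 1)      # popped first, so the left subtree is visited first
    return hits

if __name__ == "__main__":
    v = lambda *a: np.array(a, dtype=float)
    nodes = [
        Node(v(0, 0, 0), v(2, 1, 1), right=2),   # root enclosing both leaves
        Node(v(0, 0, 0), v(1, 1, 1), tri=0),     # left leaf
        Node(v(1, 0, 0), v(2, 1, 1), tri=1),     # right leaf
    ]
    # A ray shot along +x through both boxes (slightly tilted to avoid 1/0).
    print(traverse(nodes, v(-1, 0.5, 0.5), v(1, 0.001, 0.001)))   # -> [0, 1]
```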
Position based skinning of skeleton-driven deformable characters
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643194
Nadine Abu Rumman, M. Fratarcangeli
{"title":"Position based skinning of skeleton-driven deformable characters","authors":"Nadine Abu Rumman, M. Fratarcangeli","doi":"10.1145/2643188.2643194","DOIUrl":"https://doi.org/10.1145/2643188.2643194","url":null,"abstract":"This paper presents a real-time skinning technique for character animation based on a two-layered deformation model. For each frame, the skin of a generic character is first deformed by using a classic linear blend skinning approach, then the vertex positions are adjusted according to a Position Based Dynamics schema. We define geometric constraints which mimic the flesh behavior and produce interesting effects like volume conservation and secondary animations, in particular passive jiggling behavior, without relying on a predefined training set of poses. Once the whole model is defined, the character animation is synthesized in real-time without suffering of the inherent artefacts of classic interactive skinning techniques, such as the \"candy-wrapper\" effect or undesired skin bulging.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125035420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 24
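The two layers fit together roughly as follows: linear blend skinning gives a first guess, and a few Position Based Dynamics iterations then project geometric constraints on top of it. The sketch below uses plain edge-length (stretch) constraints as a stand-in for the paper's flesh-mimicking constraints; the weights, transforms, iteration count, and stiffness are illustrative assumptions.

```python
import numpy as np

def linear_blend_skinning(rest, weights, transforms):
    """First layer: classic LBS. rest (V,3), weights (V,B), transforms (B,4,4)."""
    homo = np.concatenate([rest, np.ones((len(rest), 1))], axis=1)      # (V,4)
    per_bone = np.einsum('bij,vj->vbi', transforms, homo)[..., :3]     # (V,B,3)
    return np.einsum('vb,vbi->vi', weights, per_bone)

def pbd_project_edges(pos, edges, rest_len, iterations=10, stiffness=0.5):
    """Second layer: project stretch constraints, Position Based Dynamics style."""
    pos = pos.copy()
    for _ in range(iterations):
        for (a, b), l0 in zip(edges, rest_len):
            d = pos[b] - pos[a]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = stiffness * 0.5 * (dist - l0) * d / dist   # equal masses
            pos[a] += corr
            pos[b] -= corr
    return pos

if __name__ == "__main__":
    rest = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
    weights = np.array([[1.0, 0], [0.5, 0.5], [0.0, 1]])
    bend = np.eye(4)
    bend[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90-degree bend of bone 2
    skinned = linear_blend_skinning(rest, weights, np.stack([np.eye(4), bend]))
    # Pure LBS pulls the middle vertex toward the joint (the collapse behind the
    # "candy-wrapper" artefact); the PBD pass restores the rest edge lengths.
    fixed = pbd_project_edges(skinned, edges=[(0, 1), (1, 2)], rest_len=[1.0, 1.0])
    print("LBS only:", np.round(skinned, 2))
    print("LBS + PBD:", np.round(fixed, 2))
```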
Hybrid color model for image retrieval based on fuzzy histograms
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643198
Vedran Ljubovic, H. Supic
{"title":"Hybrid color model for image retrieval based on fuzzy histograms","authors":"Vedran Ljubovic, H. Supic","doi":"10.1145/2643188.2643198","DOIUrl":"https://doi.org/10.1145/2643188.2643198","url":null,"abstract":"A hybrid color model is a color descriptor formed by combining different channels from several other color models. In computer graphics applications such models are rarely used due to redundancy. However, hybrid color models may be of interest for Content-Based Image Retrieval (CBIR). Best features of each color model can be combined to obtain optimum retrieval performance. In this paper, a novel algorithm is proposed for selection of channels for a hybrid color model used in construction of a fuzzy color histogram. This algorithm is elaborated and implemented for use with several common reference datasets consisting of photographs of natural scenes. Result of this experimental procedure is a new hybrid color model named HSY. Using standard datasets and a standard metric for retrieval performance (ANMRR), it is shown that this new model can give an improved retrieval performance. In addition, this model is of interest for use in JPEG compressed domain due to simpler calculation.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131594481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
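As a rough illustration of the general pipeline (not the paper's HSY model or its channel-selection algorithm), the sketch below soft-bins two channels into fuzzy histograms, concatenates them into a descriptor, and compares descriptors with histogram intersection. The channel choice (luma plus HSV-style saturation), the bin count, and the triangular memberships are assumptions made only for this example.

```python
import numpy as np

def fuzzy_histogram(channel, bins=16):
    """Soft-assign each value in [0, 1] to its two nearest bin centres."""
    centres = (np.arange(bins) + 0.5) / bins
    x = np.clip(channel.ravel(), 0.0, 1.0)
    idx = np.clip((x * bins - 0.5).astype(int), 0, bins - 2)
    right_w = np.clip((x - centres[idx]) * bins, 0.0, 1.0)   # triangular membership
    hist = np.zeros(bins)
    np.add.at(hist, idx, 1.0 - right_w)
    np.add.at(hist, idx + 1, right_w)
    return hist / hist.sum()

def descriptor(rgb):
    """Concatenate fuzzy histograms of two stand-in channels (luma and saturation)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b                         # luma
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)   # HSV-style saturation
    return np.concatenate([fuzzy_histogram(c) for c in (y, s)])

def similarity(d1, d2):
    """Histogram intersection, normalised so identical descriptors score 1."""
    return np.minimum(d1, d2).sum() / d1.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img_a, img_b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
    da, db = descriptor(img_a), descriptor(img_b)
    print("self:", similarity(da, da), " cross:", round(float(similarity(da, db)), 3))
```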
Sampling Gabor noise in the spatial domain
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643193
Victor Charpenay, Bernhard Steiner, Przemyslaw Musialski
{"title":"Sampling Gabor noise in the spatial domain","authors":"Victor Charpenay, Bernhard Steiner, Przemyslaw Musialski","doi":"10.1145/2643188.2643193","DOIUrl":"https://doi.org/10.1145/2643188.2643193","url":null,"abstract":"Gabor noise is a powerful technique for procedural texture generation. Contrary to other types of procedural noise, its sparse convolution aspect makes it easily controllable locally. In this paper, we demonstrate this property by explicitly introducing spatial variations. We do so by linking the sparse convolution process to the parameterization of the underlying surface. Using this approach, it is possible to provide control maps for the parameters in a natural and convenient way. In order to derive intuitive control of the resulting textures, we accomplish a small study of the influence of the parameters of the Gabor kernel with respect to the outcome and we introduce a solution where we bind values such as the frequency or the orientation of the Gabor kernel to a user-provided control map in order to produce novel visual effects.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129454601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
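To make the sparse-convolution idea concrete, here is a small 2D sketch: impulses are generated per grid cell from a hashed random stream and convolved with a Gabor kernel whose frequency and orientation are looked up from user control maps (here plain Python callables). This is a simplified illustration of spatially varying Gabor noise, not the paper's surface-parameterized method; the impulse density, bandwidth, and control maps are placeholder choices.

```python
import numpy as np

def gabor_kernel(dx, dy, a, f0, omega):
    """Gabor kernel: Gaussian envelope times an oriented cosine wave."""
    envelope = np.exp(-np.pi * a * a * (dx * dx + dy * dy))
    return envelope * np.cos(2 * np.pi * f0 * (dx * np.cos(omega) + dy * np.sin(omega)))

def gabor_noise(x, y, freq_map, orient_map, a=1.0, impulses=8, radius=1.0, seed=7):
    """Evaluate noise at (x, y); kernel parameters come from the control maps."""
    cell_x, cell_y = int(np.floor(x / radius)), int(np.floor(y / radius))
    value = 0.0
    # Visit the 3x3 neighbourhood of cells whose impulses can reach (x, y).
    for i in range(cell_x - 1, cell_x + 2):
        for j in range(cell_y - 1, cell_y + 2):
            rng = np.random.default_rng(hash((i, j, seed)) & 0xFFFFFFFF)
            for _ in range(impulses):
                px = (i + rng.random()) * radius      # impulse position
                py = (j + rng.random()) * radius
                w = rng.uniform(-1.0, 1.0)            # impulse weight
                dx, dy = x - px, y - py
                if dx * dx + dy * dy > radius * radius:
                    continue
                value += w * gabor_kernel(dx, dy, a, freq_map(px, py), orient_map(px, py))
    return value

if __name__ == "__main__":
    freq = lambda x, y: 2.0 + 2.0 * x          # frequency increases along x
    orient = lambda x, y: np.pi * y            # orientation varies along y
    img = np.array([[gabor_noise(x * 0.1, y * 0.1, freq, orient)
                     for x in range(32)] for y in range(32)])
    print("noise range:", round(float(img.min()), 3), "to", round(float(img.max()), 3))
```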
Proceedings of the 30th Spring Conference on Computer Graphics
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188
D. Gutierrez
{"title":"Proceedings of the 30th Spring Conference on Computer Graphics","authors":"D. Gutierrez","doi":"10.1145/2643188","DOIUrl":"https://doi.org/10.1145/2643188","url":null,"abstract":"Welcome to the 30th Spring Conference on Computer Graphics! This conference (\"probably the oldest regular annual meeting of computer graphics in Central Europe\") follows a long tradition of papers in all areas related to computer graphics, with topics ranging from rendering, to computational geometry or animation. I am excited to be the chair this year, and I'm looking forward to the presentations! As in previous years, the Central European Seminar on Computer Graphics (CESCG) is co-located with SCCG and serves the important function of encouraging young people in the field.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115453621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A survey of direction-preserving layout strategies
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643189
M. Steiger, J. Bernard, T. May, J. Kohlhammer
{"title":"A survey of direction-preserving layout strategies","authors":"M. Steiger, J. Bernard, T. May, J. Kohlhammer","doi":"10.1145/2643188.2643189","DOIUrl":"https://doi.org/10.1145/2643188.2643189","url":null,"abstract":"In this paper we analyze different layout algorithms that preserve relative directions in geo-referenced networks. This is an important criterion for many sensor networks such as the electric grid and other supply networks, because it enables the user to match the geographic setting with the drawing on the screen. Even today, the layout of these networks are often created manually. This is due to the requirement that these layouts must respect geographic references but should still be easy to read and understand. The range of available automatic algorithms spans from general graph layouts over schematic maps to semi-realistic drawings. At first sight, schematics seem to be a promising compromise between geographic correctness and readability. The former property exploits the mental map of the user while the latter makes it easier for the user to learn about the network structure. We investigate different algorithms for such maps together with different visualization techniques. In particular, the group of octi-linear layouts is prominent in handcrafted subway maps. These algorithms have been used extensively to generate drawings for subway maps. Also known as Metro Map layouts, only horizontal, vertical and diagonal directions are allowed. This increases flexibility and makes the resulting layout look similar to the well-known subway maps of large cities. The key difference to general graph layout algorithms is that geographic relations are respected in terms of relative directions. However, it is not clear, whether this metaphor can be transferred from metro maps to other domains. We discuss applicability of these different approaches for geo-based networks in general with the electric grid as a use-case scenario.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133631188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
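The octilinear ("Metro Map") constraint mentioned above restricts every edge to a horizontal, vertical, or 45-degree diagonal direction. The snippet below shows only the local quantisation step, snapping an edge to the nearest of the eight allowed directions while keeping its length; it is an illustrative fragment, not the global optimisation that published metro-map layout algorithms solve.

```python
import math

OCTILINEAR = [i * math.pi / 4 for i in range(8)]   # 0, 45, ..., 315 degrees

def snap_edge(p, q):
    """Return a new position for q so that the edge p->q becomes octilinear."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)
    # Pick the allowed direction with the smallest angular distance to the edge.
    best = min(OCTILINEAR, key=lambda a: abs(math.remainder(angle - a, 2 * math.pi)))
    return (p[0] + length * math.cos(best), p[1] + length * math.sin(best))

if __name__ == "__main__":
    # An edge at roughly 40 degrees snaps to the 45-degree diagonal.
    print(snap_edge((0.0, 0.0), (1.0, 0.84)))
```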
Skeleton-based matching for animation transfer and joint detection
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643197
Martin Madaras, Michal Piovarči, J. Dadová, Roman Franta, Tomás Kovacovský
{"title":"Skeleton-based matching for animation transfer and joint detection","authors":"Martin Madaras, Michal Piovarči, J. Dadová, Roman Franta, Tomás Kovacovský","doi":"10.1145/2643188.2643197","DOIUrl":"https://doi.org/10.1145/2643188.2643197","url":null,"abstract":"In this paper we present a new algorithm for establishing correspondence between objects based on matching of extracted skeletons. First, a point cloud of an input model is scanned. Second, a skeleton is extracted from the scanned point cloud. In the last step, all the extracted skeletons are matched based on valence of vertices and segment lengths. The matching process yields into two direct applications - topological mapping and segment mapping. Topological mapping can be used for detection of joint positions from multiple scans of articulated figures in different poses. Segment mapping can be used for animation transfer and for transferring of arbitrary surface per-vertex properties. Our approach is unique, because it is based on matching of extracted skeletons only and does not require vertex correspondence.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131769804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
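The matching criterion (vertex valence plus segment lengths) can be illustrated with a small sketch: each skeleton joint is described by its valence and the normalised lengths of its incident segments, and joints of two skeletons are then paired greedily by descriptor similarity. This is a simplified stand-in for the paper's matching procedure, intended only to show the descriptor; the greedy pairing and the normalisation are assumptions.

```python
import numpy as np

def joint_descriptors(positions, edges):
    """positions: (N,3) joint coordinates, edges: list of (i, j) bone index pairs."""
    lengths = {i: [] for i in range(len(positions))}
    for i, j in edges:
        l = float(np.linalg.norm(positions[i] - positions[j]))
        lengths[i].append(l)
        lengths[j].append(l)
    total = sum(sum(v) for v in lengths.values()) / 2 or 1.0
    # Descriptor: (valence, sorted incident segment lengths / total skeleton length).
    return {i: (len(v), tuple(sorted(x / total for x in v)))
            for i, v in lengths.items()}

def match_skeletons(desc_a, desc_b):
    """Greedy one-to-one matching; only joints with equal valence may pair."""
    def cost(da, db):
        if da[0] != db[0]:
            return float("inf")
        return sum(abs(x - y) for x, y in zip(da[1], db[1]))

    pairs, used = [], set()
    for i, da in desc_a.items():
        j_best = min((j for j in desc_b if j not in used),
                     key=lambda j: cost(da, desc_b[j]), default=None)
        if j_best is not None and cost(da, desc_b[j_best]) < float("inf"):
            pairs.append((i, j_best))
            used.add(j_best)
    return pairs

if __name__ == "__main__":
    pos = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [1, 1, 0]])
    edges = [(0, 1), (1, 2), (1, 3)]
    d = joint_descriptors(pos, edges)
    print(match_skeletons(d, d))   # a skeleton matches itself joint-for-joint
```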
Rapid modelling of interactive geological illustrations with faults and compaction
Proceedings of the 30th Spring Conference on Computer Graphics Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643201
Mattia Natali, J. Parulek, Daniel Patel
{"title":"Rapid modelling of interactive geological illustrations with faults and compaction","authors":"Mattia Natali, J. Parulek, Daniel Patel","doi":"10.1145/2643188.2643201","DOIUrl":"https://doi.org/10.1145/2643188.2643201","url":null,"abstract":"In this paper, we propose new methods for building geological illustrations and animations. We focus on allowing geologists to create their subsurface models by means of sketches, to quickly communicate concepts and ideas rather than detailed information. The result of our sketch-based modelling approach is a layer-cake volume representing geological phenomena, where each layer is rock material which has accumulated due to a user-defined depositional event. Internal geological structures can be inspected by different visualization techniques that we employ. Faulting and compaction of rock layers are important processes in geology. They can be modelled and visualized with our technique. Our representation supports non-planar faults that a user may define by means of sketches. Real-time illustrative animations are achieved by our GPU accelerated approach.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128446493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
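A toy sketch of the layer-cake representation: each depositional event adds a thickness field on top of the previous layers, and a fault is applied afterwards as a vertical offset on one side of the fault trace. The paper's sketch-based, non-planar faults and compaction model are far richer; the planar fault, the thickness fields, and the throw value below are placeholders that only illustrate the data layout.

```python
import numpy as np

def deposit(layers, thickness):
    """Append a new layer: its top surface is the previous top plus the thickness field."""
    top = layers[-1] if layers else np.zeros_like(thickness)
    layers.append(top + np.maximum(thickness, 0.0))
    return layers

def apply_fault(layers, fault_mask, throw):
    """Drop every layer surface by `throw` on the hanging-wall side of the fault."""
    return [surface - throw * fault_mask for surface in layers]

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 50)
    layers = []
    deposit(layers, np.full_like(x, 2.0))                 # first depositional event
    deposit(layers, 1.0 + 0.5 * np.sin(4 * np.pi * x))    # uneven second deposit
    fault_mask = (x > 0.5).astype(float)                  # fault trace at x = 0.5
    faulted = apply_fault(layers, fault_mask, throw=0.8)
    print("top-surface jump across the fault:",
          round(float(faulted[-1][24] - faulted[-1][25]), 2))
```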