Proceedings of the 30th Spring Conference on Computer Graphics: Latest Publications

Fast and Furious: How the web got turbo charged just in time…
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2022-03-17 | DOI: 10.1145/2643188.3527459
Citations: 0
Fast and Furious: How the web got turbo charged just in time?
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2022-03-17 | DOI: 10.1145/2643188.3527460
M. Franz
Citations: 0
Cheap rendering vs. costly annotation: rendered omnidirectional dataset of vehicles
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643191
Peter Slosár, Roman Juránek, A. Herout
Abstract: Vehicle detection in traffic surveillance requires large, high-quality training datasets to achieve competitive detection rates. We show an approach to the automatic synthesis of custom datasets that simulates the major influences on appearance: viewpoint, camera parameters, sunlight, the surrounding environment, and so on. Our goal is a competitive vehicle detector that "has not seen a real car before." We use Blender as the modeling and rendering engine; a suitable scene graph, accompanied by a set of scripts, allows simple configuration of the synthesized dataset. The generator also stores a rich set of metadata used as annotations of the synthesized images. We synthesized several experimental datasets and evaluated their statistical properties against real-life datasets. Most importantly, we trained a detector on the synthetic data; its detection performance is comparable to that of a detector trained on a state-of-the-art real-life dataset. Synthesizing a dataset of 10,000 images takes only a few hours, which is far more efficient than manual annotation, not to mention free of the human error annotation entails.
Citations: 4
Adaptive BVH: an evaluation of an efficient shared data structure for interactive simulation
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643192
Colin Fowler, Michael J. Doyle, M. Manzke
Abstract: The push toward realistic simulations at interactive rates has motivated research in both rendering and physical simulation. This heightened realism involves larger datasets and data structures and comes at a high computational cost. We investigate simulations involving collision detection and real-time ray tracing and note similarities in the data structures used to accelerate them. Our investigation demonstrates that a single acceleration data structure (ADS) can serve both subsystems of an interactive simulation, even though they benefit from different characteristics. Typically, the collision-detection and ray-tracing systems each build an ADS that satisfies their specific needs. We argue for a shared adaptive ADS optimized for both collision detection and ray tracing: the collision-detection system builds it, and the ray-tracing algorithm reuses it after potential collisions have been resolved, saving memory, execution time, and power. The results show that no compromises need be made on build heuristics. Furthermore, the ADS may be optimized for primary and secondary rays, saving further memory, execution time, and large quantities of power.
Citations: 1
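The shared-ADS idea in the abstract above can be sketched in a few lines: one bounding volume hierarchy, built once, answers both box-overlap queries (collision detection) and ray queries (ray tracing). This is a hypothetical minimal illustration, not the authors' implementation; all names are invented, and a production system would use SAH builds and optimized traversal.

```python
class AABB:
    """Axis-aligned bounding box; the one primitive both subsystems share."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def union(self, other):
        return AABB([min(a, b) for a, b in zip(self.lo, other.lo)],
                    [max(a, b) for a, b in zip(self.hi, other.hi)])

    def overlaps(self, other):  # collision-detection query
        return all(a <= d and c <= b
                   for a, b, c, d in zip(self.lo, self.hi, other.lo, other.hi))

    def hit_by(self, origin, inv_dir):  # ray-tracing query (slab test)
        tmin, tmax = 0.0, float("inf")
        for o, inv, lo, hi in zip(origin, inv_dir, self.lo, self.hi):
            t0, t1 = (lo - o) * inv, (hi - o) * inv
            tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
        return tmin <= tmax

class Node:
    def __init__(self, box, left=None, right=None, prim=None):
        self.box, self.left, self.right, self.prim = box, left, right, prim

def build(prims):
    """Median split on x centroid; prims is a list of (id, AABB)."""
    if len(prims) == 1:
        pid, box = prims[0]
        return Node(box, prim=pid)
    prims = sorted(prims, key=lambda p: p[1].lo[0] + p[1].hi[0])
    mid = len(prims) // 2
    left, right = build(prims[:mid]), build(prims[mid:])
    return Node(left.box.union(right.box), left, right)

def query_overlap(node, box, out):
    """Collision detection: collect primitive ids whose boxes overlap `box`."""
    if not node.box.overlaps(box):
        return
    if node.prim is not None:
        out.append(node.prim)
    else:
        query_overlap(node.left, box, out)
        query_overlap(node.right, box, out)

def query_ray(node, origin, direction, out):
    """Ray tracing: collect primitive ids whose boxes the ray pierces."""
    inv = [1.0 / d if d != 0 else float("inf") for d in direction]
    def walk(n):
        if not n.box.hit_by(origin, inv):
            return
        if n.prim is not None:
            out.append(n.prim)
        else:
            walk(n.left)
            walk(n.right)
    walk(node)
```

Both query functions traverse the same tree, which is the memory- and build-time saving the abstract argues for; the paper's contribution is making one build heuristic serve both workloads well.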
Computation and perception: building better displays
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2699750
D. Gutierrez
Abstract: Computational displays have recently emerged as a fascinating new research area. By combining smart processing with novel optics and electronics, their ultimate goal is to provide a better viewing experience. This may be achieved by means of an extended dynamic range, better color reproduction, or even glasses-free stereoscopic techniques. However, whatever the improvements, they will always be bounded by the limitations of current technology. We argue that by adding perceptual models of human vision to the design of displays, some of these hard limitations can be circumvented, providing an enhanced viewing experience beyond what should be physically and technically possible. In this paper we show how such a perceptually based strategy is currently being applied in different prototype implementations.
Citations: 0
Live ultrasound-based particle visualization of blood flow in the heart
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643200
Paolo Angelelli, S. Snare, H. Hauser, S. Nyrnes, L. Løvstakken, S. Bruckner
Abstract: We introduce an integrated method for the acquisition, processing, and visualization of live, in-vivo blood flow in the heart. The method is based on ultrasound imaging with a plane-wave acquisition protocol, which produces high-frame-rate ensemble data that are efficiently processed to extract directional flow information not previously available from conventional Doppler imaging. These data are then visualized with a tailored pathlet-based approach to convey the slice-contained dynamic movement of blood in the heart. This is especially important when imaging patients with possible congenital heart disease, who typically exhibit complex flow patterns that are challenging to interpret. With this approach it is now possible, for the first time, to achieve a real-time integration-based visualization of 2D blood-flow aspects based on ultrasonic imaging. We demonstrate our solution on selected cases of congenital heart disease in neonates, showing how our technique allows a more accurate and intuitive visualization of shunt flow and vortices.
Citations: 10
History of SCCG
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2700584
E. Ruzický, A. Ferko
Abstract: We survey three decades of conferencing activities, celebrating the 30th anniversary of SCCG. By chance, this coincides with 95 years of research and education at Comenius University Bratislava. The highlights include global timeline milestones, the history of the oldest regular graphics conference in Central Europe, the collocated and internationally unique student seminar CESCG, and an inevitably open conclusion.
Citations: 0
Multiple instances object detection
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643190
Z. Haladová, E. Sikudová
Abstract: Since the beginning of the new century, the growing popularity of markerless augmented-reality (AR) applications has inspired research in object instance detection, registration, and tracking. Using common daily objects, or specially developed fliers and magazines (e.g., IKEA's), as AR markers has become more popular than traditional ARToolKit-like black-and-white patterns. Although many new methods for object instance detection emerge every year, very little attention is paid to the case where multiple instances of the same object are present in the scene and need to be augmented (e.g., a table full of fliers, or several exemplars of historical coins in a museum). In this paper we review existing methods of multiple-instance detection and propose a new method for grayscale images that overcomes the limitations of previous methods.
Citations: 2
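As a toy illustration of the multiple-instance problem the abstract describes, template matching with greedy non-maximum suppression keeps one score peak per instance instead of a single best match. This is a hypothetical baseline sketch, not the paper's method; thresholds and the suppression radius are made up for the example.

```python
import numpy as np

def match_scores(image, template):
    """Dense negative-SSD score map: higher means a better template fit."""
    th, tw = template.shape
    H, W = image.shape
    scores = np.empty((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            scores[y, x] = -np.sum((patch - template) ** 2)
    return scores

def nms(scores, threshold, radius):
    """Greedy non-maximum suppression: repeatedly take the best peak and
    blank its neighbourhood, so each object instance yields one detection."""
    peaks = []
    s = scores.copy()
    while True:
        y, x = np.unravel_index(np.argmax(s), s.shape)
        if s[y, x] < threshold:
            break
        peaks.append((int(y), int(x)))
        s[max(0, y - radius):y + radius + 1,
          max(0, x - radius):x + radius + 1] = -np.inf
    return peaks
```

Without the suppression step, a detector tuned for single-instance retrieval returns only the global maximum, which is exactly the limitation the paper targets.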
Kinect-supported dataset creation for human pose estimation
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643195
Kamil Behún, A. Herout, A. Páldy
Abstract: Training and evaluation datasets for specific human pose estimation tasks are hard to find. This paper presents an approach for the rapid construction of a precisely annotated training dataset for pose estimation of a seated subject, intended especially for the aeronautic cockpit. We propose using Kinect to collect ground truth for a purely visual dataset (for reasons defined by the application, deploying Kinect or similar structured-light approaches at run time is impossible). Since Kinect's annotation of individual joints can be imprecise at certain moments, manual post-processing of the acquired data is necessary, and we propose a scheme for efficient and reliable manual post-annotation. We produced a dataset of 6,322 annotated frames involving 11 human subjects recorded under various lighting conditions, in different clothing, and against varying backgrounds. Each frame contains one seated person in frontal view, with pose annotation and optical-flow data. To verify the dataset's usability, we trained Random Forest-based detectors of body parts on it. These preliminary results show that the detector can be trained successfully on the developed dataset and that optical flow contributes considerably to detection accuracy. The dataset and the intermediate data used during its creation are made publicly available; with this, we intend to support further research and evaluation of human pose estimation focused on a seated subject in a cockpit scenario.
Citations: 4
Evaluating the covariance matrix constraints for data-driven statistical human motion reconstruction
Proceedings of the 30th Spring Conference on Computer Graphics | Pub Date: 2014-05-28 | DOI: 10.1145/2643188.2643199
Christos Mousas, Paul F. Newbury, C. Anagnostopoulos
Abstract: This paper evaluates character motion reconstruction when constraints are applied to the covariance matrix of the motion-prior learning process. For the evaluation, a maximum a posteriori (MAP) framework is first built, which receives input trajectories and reconstructs the character's motion. Then, using various methods to constrain the covariance matrix, information reflecting certain assumptions about the reconstruction process is retrieved. Each covariance-matrix constraint is evaluated by its ability to reconstruct the desired motion sequences, either using a large amount of motion data or a small dataset containing only specific motions.
Citations: 20
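The effect of a covariance constraint can be illustrated with a toy Gaussian motion prior: given observed pose dimensions, the MAP estimate of the unobserved ones is the Gaussian conditional mean, and constraining the covariance (here, forcing it diagonal) removes the cross-correlations the reconstruction relies on. The data, dimensions, and function names below are all hypothetical; this is not the authors' framework, only the underlying statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "motion prior": dim 1 is perfectly correlated with dim 0,
# dim 2 is independent noise. Each row is one pose sample.
latent = rng.normal(size=(500, 1))
data = np.hstack([latent, 0.8 * latent, rng.normal(scale=0.1, size=(500, 1))])
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

def map_reconstruct(obs_idx, obs_val, mu, cov):
    """MAP estimate of the unobserved dimensions under a Gaussian prior:
    the conditional mean mu_h + C_ho C_oo^{-1} (x_o - mu_o)."""
    hid_idx = [i for i in range(len(mu)) if i not in obs_idx]
    C_oo = cov[np.ix_(obs_idx, obs_idx)]
    C_ho = cov[np.ix_(hid_idx, obs_idx)]
    gain = C_ho @ np.linalg.solve(C_oo, obs_val - mu[obs_idx])
    est = mu.copy()
    est[obs_idx] = obs_val
    est[hid_idx] = mu[hid_idx] + gain
    return est

# Full covariance exploits the correlation between dims 0 and 1;
# the diagonal constraint discards it and falls back to the prior mean.
full = map_reconstruct([0], np.array([2.0]), mu, cov)
diag = map_reconstruct([0], np.array([2.0]), mu, np.diag(np.diag(cov)))
```

Here `full[1]` recovers roughly 0.8 times the observed value, while `diag[1]` stays at the prior mean, which is the kind of difference the paper's evaluation quantifies across constraint choices and dataset sizes.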