2014 27th SIBGRAPI Conference on Graphics, Patterns and Images: Latest Publications

Image-Based Streamsurfaces
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.30
G. Machado, F. Sadlo, T. Ertl
Abstract: Streamsurfaces are of fundamental importance to visualization of flows. Among other features, they offer strong capabilities in revealing flow behavior (e.g., in the vicinity of vortices), and are an essential tool for the computation of 2D separatrices in vector field topology. Computing streamsurfaces is, however, typically expensive due to the difficult triangulation involved, in particular when triangle sizes are kept in the order of the size of a pixel. We investigate image-based approaches for rendering streamsurfaces without triangulation, and propose a new technique that renders them by dense streamlines. Although our technique does not perform triangulation, it does not depend on user parametrization to avoid noticeable gaps. Our GPU-based implementation shows that our technique provides interactive frame rates and low memory usage in practical applications. We also show that previous texture-based flow visualization approaches can be integrated with our method, for example, for the visualization of flow direction with line integral convolution.
Citations: 6
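The dense-streamline technique in the abstract rests on numerically integrating streamlines through the vector field. As a generic illustration only (not the authors' GPU implementation; the callable field `v`, step size `h`, and step count are assumptions for the example), a fourth-order Runge-Kutta streamline tracer might look like:

```python
import numpy as np

def rk4_streamline(v, seed, h=0.1, steps=100):
    """Trace one streamline of the vector field v from seed using RK4.

    v: callable mapping a position (ndarray) to a velocity (ndarray).
    Returns the array of visited positions (steps + 1 points).
    """
    p = np.asarray(seed, dtype=float)
    points = [p.copy()]
    for _ in range(steps):
        k1 = v(p)
        k2 = v(p + 0.5 * h * k1)
        k3 = v(p + 0.5 * h * k2)
        k4 = v(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        points.append(p.copy())
    return np.array(points)
```

A streamsurface rendering in this spirit would seed many such lines along a curve and draw them densely enough that no pixel-scale gaps remain.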
A Fast Feature Tracking Algorithm for Visual Odometry and Mapping Based on RGB-D Sensors
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.13
Bruno M. F. Silva, L. Gonçalves
Abstract: The recent introduction of low-cost sensors such as the Kinect allows the design of real-time applications (e.g., in robotics) that exploit novel capabilities. One such application is visual odometry, a fundamental module of any robotic platform, which uses the synchronized color/depth streams captured by these devices to build a map representation of the environment at the same time that the robot is localized within the map. Aiming to minimize the error accumulation inherent to the process of robot localization, we design a visual feature tracker that works as the front-end of a visual odometry system for RGB-D sensors. Feature points are added to the tracker selectively, based on pre-specified criteria such as the number of currently active points and their spatial distribution throughout the image. Our proposal is a tracking strategy that allows real-time camera pose computation (24.847 ms per frame on average) despite the fact that no specialized hardware (such as modern GPUs) is employed. Experiments carried out on publicly available benchmarks and datasets demonstrate the usefulness of the method, which achieved RMSE rates superior to those of the state-of-the-art RGB-D SLAM algorithm.
Citations: 6
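The selective admission policy described in the abstract (capping the number of active points and spreading them across the image) can be sketched with a simple occupancy grid. This is a hypothetical illustration, not the authors' tracker; `grid`, `max_per_cell`, and `max_total` are assumed parameters:

```python
from collections import Counter

def select_features(candidates, active, frame_w, frame_h,
                    grid=8, max_per_cell=2, max_total=100):
    """Admit candidate keypoints only into grid cells that are not yet
    full, enforcing an even spatial distribution of tracked points."""
    def cell(p):
        # Map an (x, y) pixel position to its grid-cell coordinates.
        return (int(p[0] * grid / frame_w), int(p[1] * grid / frame_h))

    counts = Counter(cell(p) for p in active)
    selected = list(active)
    for p in candidates:
        if len(selected) >= max_total:
            break
        if counts[cell(p)] < max_per_cell:
            selected.append(p)
            counts[cell(p)] += 1
    return selected
```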
WebcamPaperPen: A Low-Cost Graphics Tablet
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.54
Gustavo Pfeiffer, R. Marroquim, Antonio A. F. Oliveira
Abstract: We present an inexpensive, practical, easy-to-set-up and modestly precise system to generate computer mouse input in a similar fashion to a graphics tablet, using a webcam, paper, pen and a desk lamp. None of the existing methods, to our knowledge, solves the task with all these qualities. Our method detects clicks using the pen shadow and computes the mouse cursor position by predicting where the pen tip and its shadow will hit each other. It employs a series of image processing algorithms and heuristics to extract interest features with subpixel precision, projective geometry operations to reconstruct the real-world position, and hysteresis filters to improve stability. We also provide an algorithm for easy calibration. Finally, the quality of our method is evaluated with user tests and a quantitative experiment.
Citations: 3
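Predicting where the pen tip and its shadow "hit each other" amounts to intersecting their two image-space trajectories. A minimal 2D line-intersection sketch, as a hypothetical helper rather than the paper's subpixel pipeline:

```python
def intersect_lines(p1, d1, p2, d2):
    """Intersect two 2D lines given as point + direction.

    Solves p1 + t*d1 == p2 + s*d2 for the meeting point; returns None
    for (near-)parallel lines, e.g. when tip and shadow trajectories
    never converge.
    """
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```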
Interactive Object Class Segmentation for Mobile Devices
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.35
I. Gallo, Alessandro Zamberletti, L. Noce
Abstract: In this paper we propose an interactive approach for object class segmentation of natural images on touch-screen-capable mobile devices. The key research question this paper tries to answer is: can we effectively correct the errors committed by an automatic or semi-automatic figure-ground segmentation algorithm while also providing real-time feedback to the user on a low-computational-power mobile device? Many research works have focused on improving automatic or semi-automatic figure-ground segmentation algorithms, but none has tried to take advantage of the touch-screen technology integrated in most modern mobile devices to optimize the segmentation results of these algorithms. Our key idea is to use superpixels as interactive buttons that can be quickly tapped by the user to be added to or removed from an initial low-quality segmentation mask, with the aim of correcting the segmentation errors and producing a satisfying final result. We performed an extensive analysis of the proposed approach by implementing it both on a desktop computer and on a mid-range Android device. Even though our method is extremely simple, the results we obtained are comparable with those achieved by other state-of-the-art interactive segmentation algorithms. As such, we believe that the proposed approach can be exploited by most image-editing mobile applications to provide a simple but highly effective method for interactive object class segmentation.
Citations: 11
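The "superpixels as buttons" idea reduces each tap to flipping one region's membership in the binary mask. A minimal sketch, assuming a precomputed superpixel label image (generic, not the authors' implementation):

```python
import numpy as np

def toggle_superpixel(mask, labels, tapped_label):
    """Flip the foreground/background membership of every pixel in the
    superpixel the user tapped; mask is boolean, labels is an int image."""
    region = labels == tapped_label
    out = mask.copy()
    out[region] = ~out[region]
    return out
```

Because a tap touches only the pixels of one region, the update is cheap enough for real-time feedback even on a low-power device.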
Portable Digital In-line Holography Platform for Sperm Cell Visualization and Quantification
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.39
A. Sobieranski, F. Inci, H. Cumhur Tekin, E. Comunello, Aldo von Wangenheim, U. Demirci
Abstract: In this paper a new portable digital in-line holography platform for biological micro-scale imaging of sperm samples is presented. The platform is based on the shadow imaging principle, where biological samples are illuminated by a nearly coherent light source and their shadows are recorded by a CMOS imaging sensor with no lens requirement. The projected shadows carry holographic signatures, storing more than two-dimensional information about the analyzed sample. To improve resolution and suppress noise in the acquired holograms, a multi-frame technique based on high-dynamic-range imaging combined with summation of consecutive frames over time is used. Finally, decoding of the holograms is performed by an efficient and fast phase-recovery method, whereby morphological details of the sample can be obtained similarly to a conventional bright-field microscopy image. From an image-formation point of view, the proposed portable approach is able to visualize biological samples at high synthetic numerical aperture values, with a spatial resolution of 1-2 μm over a field of view of ≈30 mm². This field of view is considerably bigger than the imaged area of a conventional microscope with no mosaic reconstruction, and the achieved resolution is obtained using a single illumination source with no moving parts, LED arrays, or shifts of the light source. Validation of the proposed portable approach was performed using various human samples, with conventional microscopy used for confirmation purposes.
Citations: 3
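The multi-frame noise-suppression step (summing consecutive frames over time) relies on sensor noise being zero-mean, so averaging N frames cuts its standard deviation by roughly √N while the static fringe pattern survives. A generic sketch, not the platform's HDR pipeline:

```python
import numpy as np

def average_frames(frames):
    """Average a list of equally sized hologram frames to suppress
    zero-mean sensor noise while preserving the static fringe pattern."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        acc += f
    return acc / len(frames)
```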
Pre-alignment for Co-registration in Native Space
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.40
Shin-Ting Wu, A. C. Valente, L. Watanabe, C. Yasuda, A. Coan, F. Cendes
Abstract: For non-lesional patients, the correct localization of epileptogenic foci in native space remains a great challenge. Non-invasive functional PET images that provide information about cerebral activity may reveal the origin of seizure activity, but without precise anatomical detail. Co-registration of the functional images with MR images on the basis of maximization of mutual information (MMI) has been shown to be very promising in improving presurgical evaluation. Nevertheless, a mutual information (MI) function is non-convex, and the convergence of an algorithm to its optimum is guaranteed only if the initial estimate lies in its convex vicinity. We present in this paper a generally applicable method that pre-aligns the DICOM images such that their relative position becomes close to an optimum. The key to our solution is a robust user-guided interactive procedure to extract valid voxels, for both the centroid estimation and the registration. Aiming at comparative analysis, we introduce a numerical condition to quantify registration errors. The results are acceptable when we consider the intrinsic problems of the MMI-based registration algorithm we implemented.
Citations: 5
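The MMI criterion scores a candidate alignment by the mutual information between the intensity distributions of the two images. A histogram-based MI estimate can be sketched as follows (illustrative only; `bins` is an assumed discretization, and the non-convexity the abstract discusses concerns MI as a function of the spatial transform, not this evaluation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two images from their joint intensity histogram:
    MI = sum of pxy * log(pxy / (px * py)) over non-empty bins."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

MI peaks when the images are aligned, which is why a pre-alignment that lands inside the convex vicinity of that peak matters for the optimizer.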
Tampering Detection of Audio-Visual Content Using Encrypted Watermarks
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.50
Ronaldo Rigoni, P. Freitas, Mylène C. Q. Farias
Abstract: In this paper, we present a framework for detecting tampered information in digital videos. Using the proposed technique, it is possible to detect several types of tampering with pixel granularity. The framework uses a combination of temporal and spatial watermarks that do not decrease the perceived quality of the host videos. We use a modified version of the Quantization Index Modulation (QIM) algorithm to store the watermarks. Since QIM is a fragile watermarking scheme, it is possible to detect local, global, and temporal tampering and also estimate the attack type. The framework is fast, robust, and accurate.
Citations: 2
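Plain QIM embeds one bit per coefficient by snapping the value onto one of two interleaved quantization lattices; a coefficient that later sits far from both lattices reveals tampering, which is what makes the scheme fragile. A minimal generic sketch (not the authors' modified, encrypted variant; `delta` is an assumed quantization step):

```python
def qim_embed(value, bit, delta=8.0):
    """Quantize value onto the lattice selected by bit: multiples of delta
    encode 0, multiples shifted by delta/2 encode 1."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((value - offset) / delta) * delta + offset

def qim_detect(value, delta=8.0):
    """Recover the bit as the lattice whose nearest point is closest;
    a large residual to both lattices would indicate tampering."""
    d0 = abs(value - qim_embed(value, 0, delta))
    d1 = abs(value - qim_embed(value, 1, delta))
    return 0 if d0 <= d1 else 1
```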
Improving On-Patient Medical Data Visualization in a Markerless Augmented Reality Environment by Volume Clipping
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.33
Márcio C. F. Macedo, A. Apolinario
Abstract: To improve human perception of an augmented reality scene, its virtual and real entities can be rendered according to the focus+context visualization paradigm. This paradigm is especially important in the field of on-patient medical data visualization, as it gives physicians insight into the spatial relations between the patient's anatomy (focus region) and the entire body (context region). However, the existing methods in this field give no special treatment to the effect of volume clipping, which can open new ways for physicians to explore and understand the entire scene. In this paper we introduce an on-patient focus+context medical data visualization based on volume clipping, proposed in a markerless augmented reality environment. From the estimated camera pose, the volumetric medical data can be displayed to a physician inside the patient's anatomy at the location of the real anatomy. To improve the visual quality of the final scene, three methods based on volume clipping are proposed to allow new focus+context visualizations. Moreover, the whole solution supports occlusion handling. The evaluation of the proposed techniques highlights that these methods improve the visual quality of the final rendering. Furthermore, the application still runs in real time.
Citations: 11
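At its core, volume clipping is a per-voxel signed-distance test against a user-placed plane. A CPU sketch for a regular voxel grid, purely illustrative (a real-time renderer like the one described would clip per-sample on the GPU):

```python
import numpy as np

def clip_volume(volume, plane_point, plane_normal, spacing=1.0):
    """Zero every voxel lying on the positive side of the clipping plane,
    exposing the interior of the volume behind it."""
    # World coordinates of every voxel center (unit spacing by default).
    coords = np.indices(volume.shape).reshape(3, -1).T * spacing
    signed = (coords - plane_point) @ plane_normal
    out = volume.copy().reshape(-1)
    out[signed > 0] = 0
    return out.reshape(volume.shape)
```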
Simplified Training for Gesture Recognition
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.46
Romain Faugeroux, Thales Vieira, D. M. Morera, T. Lewiner
Abstract: Since gesture is a fundamental form of human communication, its recognition by a computer generates strong interest for many applications such as natural user interfaces and gaming. The popularization of real-time depth sensors brings such applications to the public at large. However, familiar gestures are culture-specific, and their automatic recognition must therefore result from a machine learning process. In particular, this requires either teaching the user how to communicate with the machine, as with popular mobile devices or gaming consoles, or tailoring the application to a specific public. The latter option serves a large number of applications such as sports monitoring, virtual reality or surveillance, although it usually requires tedious training. This work intends to simplify the training required by gesture recognition methods. While the traditional procedure uses a set of key poses, which must be defined and trained prior to a set of gestures that must also be defined and trained, we propose to automatically deduce the set of key poses from the gesture training. We represent a record of gestures as a curve in high dimension and robustly segment it according to the curvature of that curve. We then use an asymmetric Hausdorff distance between gestures to define a discriminant key pose as the most distant pose between gestures. This further allows gestures to be grouped dynamically by similarity. The training only requires the user to perform the gestures and, if needed, refine the gesture labeling. The generated set of key poses and gestures then fits into previous human action recognition algorithms. Furthermore, this semi-supervised learning allows a previous training session to be reused to extend the set of gestures the computer should be able to recognize. Experimental results show that the automatically generated discriminant key poses lead to recognition accuracy similar to previous work.
Citations: 4
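The discriminant key pose is the pose of one gesture farthest from every pose of another, i.e. the maximizer of the directed (asymmetric) Hausdorff distance. Representing gestures as arrays of pose vectors, a sketch (the pose representation here is an assumption; the paper works in a high-dimensional pose space):

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over poses a in A of the distance to the closest
    pose in B; asymmetric, since h(A, B) != h(B, A) in general."""
    return max(min(np.linalg.norm(a - b) for b in B) for a in A)

def discriminant_pose(A, B):
    """Index of the pose of gesture A most distant from gesture B,
    together with that distance (which equals h(A, B))."""
    dists = [min(np.linalg.norm(a - b) for b in B) for a in A]
    i = int(np.argmax(dists))
    return i, dists[i]
```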
Oriented Relative Fuzzy Connectedness: Theory, Algorithms, and Applications in Image Segmentation
2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. Pub Date: 2014-08-26. DOI: 10.1109/SIBGRAPI.2014.38
H. C. Bejar, P. A. Miranda
Abstract: Anatomical structures and tissues are often hard to segment in medical images due to their poorly defined boundaries, i.e., low contrast in relation to other nearby false boundaries. Specifying the boundary polarity can help to alleviate part of this problem. In this work, we discuss how to incorporate this property into the Relative Fuzzy Connectedness (RFC) framework. We include a theoretical proof of the optimality of the new algorithm, named Oriented Relative Fuzzy Connectedness (ORFC), in terms of an oriented energy function subject to the seed constraints, and show the accuracy gains obtained using MRI and CT images from thoracic studies.
Citations: 8
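For background, plain (unoriented) fuzzy connectedness assigns each pixel the strength of its best path from the seeds, where a path is only as strong as its weakest affinity; ORFC additionally weights arcs by boundary polarity, which this generic sketch omits. The intensity-difference affinity below is an assumption for illustration:

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seeds):
    """Connectivity map for a 2D image: best-path strength from the seeds,
    with path strength = minimum affinity along the path (generic FC)."""
    h, w = image.shape
    conn = np.zeros((h, w))
    heap = []
    for s in seeds:
        conn[s] = 1.0
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        if -neg < conn[y, x]:
            continue  # stale entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Assumed affinity: inverse absolute intensity difference.
                aff = 1.0 / (1.0 + abs(float(image[y, x]) - float(image[ny, nx])))
                strength = min(-neg, aff)
                if strength > conn[ny, nx]:
                    conn[ny, nx] = strength
                    heapq.heappush(heap, (-strength, (ny, nx)))
    return conn
```

In the relative variant, each object competes with its rivals' seeds and a pixel joins the object with the strongest connectivity to it.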