2014 IEEE International Conference on Image Processing (ICIP): Latest Publications

3D trajectories for action recognition
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025848
Michal Koperski, P. Bilinski, F. Brémond
Abstract: Recent developments in affordable depth sensors open new possibilities for action recognition. Depth information improves skeleton detection, so many authors have focused on pose analysis for action recognition. Skeleton detection, however, is still not robust and fails in more challenging scenarios, where the sensor is placed outside its optimal working range and serious occlusions occur. In this paper we investigate state-of-the-art methods designed for RGB videos, which have proved their performance, and extend them to benefit from depth information without the need for skeleton detection. We propose two novel video descriptors: the first combines motion and 3D information, and the second improves performance on actions with a low movement rate. We validate our approach on the challenging MSR Daily Activity 3D dataset.
Citations: 23
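The paper's exact descriptor formulas are not given in this listing; below is a minimal, hypothetical sketch of the general idea of lifting a tracked 2D point trajectory to 3D with depth samples and describing it by normalized displacements (the function name, trajectory length, and normalization are illustrative assumptions, not the authors' method).

```python
import numpy as np

def trajectory_descriptor(points_xy, depth_frames, length=15):
    """Illustrative sketch: augment a tracked 2D trajectory with depth samples
    to obtain a 3D trajectory, then describe it by its scale-normalized
    frame-to-frame displacements (all choices here are assumptions)."""
    traj = []
    for (x, y), depth in zip(points_xy, depth_frames):
        z = depth[int(round(y)), int(round(x))]   # depth value under the tracked point
        traj.append((x, y, z))
    traj = np.asarray(traj[:length], dtype=np.float64)
    disp = np.diff(traj, axis=0)                  # frame-to-frame 3D displacements
    norm = np.sum(np.linalg.norm(disp, axis=1)) + 1e-8
    return (disp / norm).ravel()                  # normalized descriptor vector
```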
Approximate Bayesian computation, stochastic algorithms and non-local means for complex noise models
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025573
C. Kervrann, Philippe Roudot, F. Waharte
Abstract: In this paper, we present a stochastic NL-means-based denoising algorithm for generalized non-parametric noise models. First, we provide a statistical interpretation of current patch-based neighborhood filters and justify a Bayesian inference that explicitly accounts for discrepancies between the model and the data. Furthermore, we investigate the Approximate Bayesian Computation (ABC) rejection method combined with density learning techniques for handling situations where the posterior is intractable or too expensive to compute. We demonstrate our stochastic Gamma NL-means (SGNL) on real images corrupted by non-Gaussian noise.
Citations: 3
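For reference, a generic ABC rejection loop (the principle the abstract builds on, not the paper's SGNL algorithm) looks like the sketch below; the toy Gamma noise example, the summary distance, and the tolerance are assumptions.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """Generic ABC rejection: keep parameter draws whose simulated data fall
    within eps of the observation under a chosen summary distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()            # draw a parameter from the prior
        synthetic = simulate(theta)       # simulate data under the noise model
        if distance(synthetic, observed) < eps:
            accepted.append(theta)
    return np.asarray(accepted)           # approximate posterior sample

# Toy usage: approximate the shape parameter of Gamma noise from mean/variance summaries.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=3.0, scale=1.0, size=500)
post = abc_rejection(
    observed=obs,
    simulate=lambda k: rng.gamma(shape=k, scale=1.0, size=500),
    prior_sample=lambda: rng.uniform(0.5, 10.0),
    distance=lambda a, b: abs(a.mean() - b.mean()) + abs(a.var() - b.var()),
    eps=0.3,
)
```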
Waterpixels: Superpixels based on the watershed transformation
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025882
V. Machairas, Etienne Decencière, Thomas Walter
Abstract: Many sophisticated segmentation algorithms rely on a first low-level segmentation step in which an image is partitioned into homogeneous regions with enforced compactness and adherence to object boundaries. These regions are called "superpixels". While the marker-controlled watershed transformation should in principle be well suited to this type of application, it has never been seriously tested in this setup, and comparisons to other methods were not made with the best possible settings. Here, we provide a scheme for applying the watershed transform to superpixel generation, using a spatially regularized gradient to achieve a tunable trade-off between superpixel regularity and adherence to object boundaries. We quantitatively evaluate our method on the Berkeley segmentation database and show that we achieve results comparable to a previously published state-of-the-art algorithm, while avoiding some of the arbitrary post-processing steps the latter requires.
Citations: 32
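The recipe described here (regular grid of markers, spatially regularized gradient, marker-controlled watershed) can be approximated with scikit-image; the sketch below is illustrative, and the grid step, regularization weight, and normalization are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, data, filters, segmentation

img = data.astronaut()
gray = color.rgb2gray(img)
grad = filters.sobel(gray)                       # gradient magnitude

step = 32                                        # grid step, roughly the superpixel size
seeds = np.zeros_like(gray, dtype=bool)
seeds[step // 2::step, step // 2::step] = True   # one seed marker per grid cell
markers, _ = ndi.label(seeds)

dist = ndi.distance_transform_edt(~seeds)        # distance of each pixel to its nearest seed
k = 0.5                                          # regularity vs. boundary-adherence trade-off
reg_grad = grad + k * (dist / step)              # spatially regularized gradient

waterpixels = segmentation.watershed(reg_grad, markers=markers)
```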
Computer-aided diagnostic system for prostate cancer detection and characterization combining learned dictionaries and supervised classification
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025456
Jérôme Lehaire, Rémi Flamary, O. Rouvière, C. Lartizien
Abstract: This paper presents results of a computer-aided diagnostic (CAD) system for voxel-based detection and characterization of prostate cancer in the peripheral zone from multiparametric magnetic resonance (mp-MR) imaging. We propose an original scheme combining a feature extraction step based on a sparse dictionary learning (DL) method with supervised classification, in order to discriminate normal {N} and normal-but-suspect {NS} tissues as well as different classes of cancer tissue whose aggressiveness is characterized by Gleason scores ranging from 6 {GL6} to 9 {GL9}. We compare the classification performance of two supervised methods, the linear support vector machine (SVM) and logistic regression (LR) classifiers, in a binary classification task. Classification performance was evaluated on an mp-MR image database of 35 patients in which each voxel was labeled, based on a ground truth, by an expert radiologist. Results show that the proposed method, in addition to being interpretable thanks to the sparse representation of the voxels, compares well (AUC > 0.8) with recent state-of-the-art performance. Preliminary visual analysis of example patient cancer-probability maps indicates that cancer probabilities tend to increase as a function of the Gleason score.
Citations: 7
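A hypothetical skeleton of such a pipeline, with sparse dictionary coding feeding a supervised classifier, might look as follows in scikit-learn; the toy data, dictionary size, and classifier settings are assumptions, not those of the evaluated CAD system.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for per-voxel mp-MR feature vectors and binary labels (normal vs. cancer);
# the real system uses expert-labeled voxels from 35 patients, not random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)

def make_cad(classifier):
    # Sparse codes from a learned dictionary feed a supervised classifier.
    return make_pipeline(
        MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0),
        classifier,
    )

svm_cad = make_cad(LinearSVC(C=1.0)).fit(X, y)
lr_cad = make_cad(LogisticRegression(max_iter=1000)).fit(X, y)
cancer_prob = lr_cad.predict_proba(X[:5])[:, 1]   # per-voxel cancer probabilities (toy data)
```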
Adaptive regularization of the NL-means for video denoising
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025547
Camille Sutour, Jean-François Aujol, C. Deledalle, J. Domenger
Abstract: We derive a denoising method based on an adaptive regularization of the non-local means. The NL-means reduce noise by exploiting the redundancy in natural images: they compute a weighted average of pixels whose surroundings are similar. The method performs well but suffers from residual noise on singular structures. We use the weights computed in the NL-means as a measure of the performance of the denoising process. These weights balance the data-fidelity term in an adapted ROF model, in order to perform locally adaptive TV regularization. Moreover, the model can be adapted to different noise statistics, and a fast resolution can be computed in the general case of the exponential family. We adapt the model to video denoising by using spatio-temporal patches. Compared to spatial patches, they offer better temporal stability, while the adaptive TV regularization corrects the residual noise observed around moving structures.
Citations: 3
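The abstract's key idea, using NL-means weights to rebalance the data-fidelity term of a ROF model, can be written in one plausible form as follows; the notation and the exact form of the fidelity weight are assumptions, not taken from the paper.

```latex
% \tilde{u} is the NL-means output and the local fidelity weight \lambda(x) grows
% with the sum of NL-means weights at x: pixels with many good patch matches stay
% close to \tilde{u}, while poorly matched ones rely more on TV smoothing.
\hat{u} = \arg\min_{u} \sum_{x} \lambda(x)\,\bigl(u(x) - \tilde{u}(x)\bigr)^{2} + \mathrm{TV}(u),
\qquad \lambda(x) \propto \sum_{y \in \mathcal{N}(x)} w(x, y).
```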
Contactless measurement of muscles fatigue by tracking facial feature points in a video
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025849
Ramin Irani, Kamal Nasrollahi, T. Moeslund
Abstract: Physical exercise may result in muscle tiredness, known as muscle fatigue. This occurs when the muscles cannot exert normal force, or when more than normal effort is required. Fatigue is a vital sign, for example, for therapists to assess their patients' progress or to change their exercises when the level of fatigue might be dangerous for the patients. Current technologies for measuring tiredness, such as electromyography (EMG), require sensors to be installed on the body. In some applications, like remote patient monitoring, this might not be possible. To deal with such cases, we present in this paper a contactless method based on computer vision techniques that measures tiredness by detecting, tracking, and analyzing facial feature points during exercise. Experimental results on several test subjects, compared against ground-truth data, show that the proposed system can properly find the temporal point at which the muscles tire while the test subjects are doing physical exercises.
Citations: 15
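Only the tracking front-end lends itself to a short sketch: the OpenCV snippet below tracks feature points with Lucas-Kanade optical flow and records their average vertical motion over time. The video path, parameters, and motion summary are placeholders, and the paper's actual fatigue analysis is not reproduced here.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("face_video.avi")   # placeholder path; in practice, restrict points to a face ROI
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read the input video")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=7)

motion_signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good_new = new_points[status.ravel() == 1]
    good_old = points[status.ravel() == 1]
    # Mean vertical displacement of the tracked points for this frame.
    motion_signal.append(np.mean(np.abs(good_new[:, 0, 1] - good_old[:, 0, 1])))
    prev_gray, points = gray, good_new.reshape(-1, 1, 2)
cap.release()
```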
A particle swarm optimization inspired tracker applied to visual tracking
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025085
C. Mollaret, F. Lerasle, I. Ferrané, J. Pinquier
Abstract: Visual tracking is a dynamic optimization problem in which time and object state simultaneously influence the solution. In this paper we build a tracker from an evolutionary optimization approach, the PSO (Particle Swarm Optimization) algorithm. We demonstrate that an extension of the original algorithm, in which the system dynamics is explicitly taken into consideration, can perform efficient tracking. The tracker is shown to outperform the SIR (Sampling Importance Resampling) algorithm with random-walk and constant-velocity models, as well as a previous PSO-inspired tracker, SPSO (Sequential Particle Swarm Optimization). Experiments were performed both on simulated data and on real visual RGB-D information. Our PSO-inspired tracker can be a very effective and robust alternative for visual tracking.
Citations: 9
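A generic PSO loop adapted to tracking, seeding particles around the previous state and moving them toward the best-scoring candidates, is sketched below; this is plain PSO with an assumed Gaussian re-seeding step, not the paper's dynamics-aware extension.

```python
import numpy as np

def pso_track(score, prev_state, n_particles=30, n_iters=20, sigma=5.0,
              w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch for tracking: particles are candidate states (e.g. x, y),
    seeded around the previous estimate, and moved toward personal/global bests
    of an appearance score."""
    rng = np.random.default_rng(seed)
    dim = len(prev_state)
    pos = prev_state + sigma * rng.standard_normal((n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([score(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([score(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

# Toy usage: the "appearance score" peaks at the true object position (60, 45).
estimate = pso_track(lambda p: -np.sum((p - np.array([60.0, 45.0])) ** 2),
                     prev_state=np.array([50.0, 40.0]))
```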
Sensarea, a general public video editing application
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025696
P. Bertolino
Abstract: In this demonstration, we present an advanced prototype of a novel general-public software application that provides the user with a set of interactive tools to select and accurately track multiple objects in a video. The originality of the proposed software is that it does not impose a rigid modus operandi and that automatic and manual tools can be used at any moment on any object. Moreover, it is the first time that powerful video object segmentation tools have been integrated into a user-friendly, industrial-grade, non-commercial application dedicated to accurate object tracking. With our software, special effects can be applied to the tracked objects and saved to a video file, and the object masks can also be exported for applications that need ground-truth data or that want to improve the user experience with clickable videos.
Citations: 7
Super-resolution from a low- and partial high-resolution image pair
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025430
Moncef Hidane, Jean-François Aujol, Y. Berthoumieu, C. Deledalle
Abstract: The classical super-resolution (SR) setting starts with a set of low-resolution (LR) images related by subpixel shifts and tries to reconstruct a single high-resolution (HR) image. In some cases, partial observations of the HR image are also available. Trying to complete the missing HR data without any reference to the LR data is an inpainting (or completion) problem. In this paper, we consider the problem of recovering a single HR image from a pair consisting of a complete LR image and an incomplete HR image. This setting arises in particular when one wants to fuse image data captured at two different resolutions. We propose an efficient algorithm that takes advantage of both sources by first learning non-local interactions from patches of an interpolated version of the LR image. Those interactions are then used in a convex energy function whose minimization yields a super-resolved complete image.
Citations: 4
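One plausible convex energy for this LR-plus-partial-HR setting is given below; the notation and the weighting of the terms are assumptions, not taken from the paper.

```latex
% \Omega_H is the set of observed HR pixels h, A is the downsampling operator
% mapping the HR image u to the LR grid, \ell is the complete LR image, and
% w(x, y) are non-local weights learned from patches of the interpolated LR image.
\hat{u} = \arg\min_{u}\;
  \sum_{x \in \Omega_H} \bigl(u(x) - h(x)\bigr)^{2}
  + \mu\,\lVert A u - \ell \rVert_{2}^{2}
  + \gamma \sum_{x, y} w(x, y)\,\bigl(u(x) - u(y)\bigr)^{2}.
```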
Handling noise in image deconvolution with local/non-local priors
2014 IEEE International Conference on Image Processing (ICIP) Pub Date: 2014-10-27 DOI: 10.1109/ICIP.2014.7025535
Hicham Badri, H. Yahia
Abstract: Non-blind deconvolution consists in recovering a sharp latent image from a blurred image with a known kernel. Deconvolved images usually contain unpleasant artifacts due to the ill-posedness of the problem, even when the kernel is known. Making use of natural sparse priors has been shown to reduce ringing artifacts, but their ability to handle noise remains limited. On the other hand, non-local priors have been shown to give the best results in image denoising. In this paper we propose to combine both local and non-local priors to handle noise. We show that blur increases the self-similarity within an image and thus makes non-local priors a good choice for denoising blurred images. However, denoising introduces outliers which are not Gaussian and should be properly modeled. Experiments show that our method produces better image reconstructions, both visually and empirically, than some popular methods.
Citations: 2
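As an illustration of mixing a local data-fidelity step with a non-local prior, the sketch below alternates a closed-form FFT deconvolution step with NL-means denoising in a plug-and-play fashion; it conveys the spirit of combining the two priors but is not the paper's algorithm, and the kernel, noise level, and penalty parameter are assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.signal import fftconvolve
from skimage import data, restoration, util

def psf2otf(psf, shape):
    """Embed the PSF in an array of the image's shape, centered at (0, 0), and take its FFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return fft2(pad)

img = util.img_as_float(data.camera())
psf = np.ones((7, 7)) / 49.0                     # known box-blur kernel
rng = np.random.default_rng(0)
blurred = fftconvolve(img, psf, mode="same") + 0.01 * rng.standard_normal(img.shape)

K = psf2otf(psf, img.shape)
z = blurred.copy()
rho = 0.05                                       # penalty weight coupling the two steps
for _ in range(5):
    # Local step: closed-form quadratic deconvolution given the current denoised estimate z.
    u = np.real(ifft2((np.conj(K) * fft2(blurred) + rho * fft2(z)) / (np.abs(K) ** 2 + rho)))
    # Non-local step: NL-means denoising of the current estimate.
    z = restoration.denoise_nl_means(u, h=0.02, patch_size=5, patch_distance=6, fast_mode=True)
```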