2007 IEEE International Conference on Image Processing: Latest Publications

Shape Priors by Kernel Density Modeling of PCA Residual Structure
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4380022
J. P. Lewis, Iman Mostafavi, G. Sosinsky, M. Martone, Ruth West
{"title":"Shape Priors by Kernel Density Modeling of PCA Residual Structure","authors":"J. P. Lewis, Iman Mostafavi, G. Sosinsky, M. Martone, Ruth West","doi":"10.1109/ICIP.2007.4380022","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4380022","url":null,"abstract":"Modern image processing techniques increasingly use prior models of the expected distribution of objects. Principal component eigen-models are often selected for shape prior modeling, but are limited in capturing only the second order moment statistics. On the other hand, kernel densities can in concept reproduce arbitrary statistics, but are problematic for high dimensional data such as shapes. An evident approach is to combine these methods, using PCA to reduce the problem dimensionality, followed by kernel density modeling of the PCA coefficients. In this paper we show that useful algorithmic and editing operations can be formulated in term of this simple approach. The operations are illustrated in the context of point distribution shape models. Particular points can be rapidly evaluated as being plausible or outliers, and a plausible shape can be completed given limited operator input in a manually guided procedure. This \"PCA+KD\" approach is conceptually simple, scalable (becoming increasingly accurate with additional training data), provides improved modeling power, and supports useful algorithmic queries.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117097019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foveal Wavelet-Based Color Active Contour
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378937
A. Maalouf, P. Carré, B. Augereau, C. Fernandez-Maloigne
{"title":"Foveal Wavelet-Based Color Active Contour","authors":"A. Maalouf, P. Carré, B. Augereau, C. Fernandez-Maloigne","doi":"10.1109/ICIP.2007.4378937","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4378937","url":null,"abstract":"A framework for active contour segmentation in vector-valued images is presented. It is known that the standard active contour is a powerful segmentation method, yet it is susceptible to weak edges and image noise. The proposed scheme uses foveal wavelets for an accurate detection of the edges singularities of the image. The foveal wavelets introduced by Mallat (2000) are known by their high capability to precisely characterize the holder regularity of singularities. Therefore, image contours are accurately localized and are well discriminated from noise. Foveal wavelet coefficients are updated using the gradient descent algorithm to guide the snake deformation to the true boundaries of the objects being segmented. Thus, the curve flow corresponding to the proposed active contour holds formal existence, uniqueness, stability and correctness results in spite of the presence of noise where traditional snake approach may fail.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129944448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Abnormal Event Detection from Surveillance Video by Dynamic Hierarchical Clustering
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379786
Fan Jiang, Ying Wu, A. Katsaggelos
{"title":"Abnormal Event Detection from Surveillance Video by Dynamic Hierarchical Clustering","authors":"Fan Jiang, Ying Wu, A. Katsaggelos","doi":"10.1109/ICIP.2007.4379786","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379786","url":null,"abstract":"The clustering-based approach for detecting abnormalities in surveillance video requires the appropriate definition of similarity between events. The HMM-based similarity defined previously falls short in handling the overfitting problem. We propose in this paper a multi-sample-based similarity measure, where HMM training and distance measuring are based on multiple samples. These multiple training data are acquired by a novel dynamic hierarchical clustering (DHC) method. By iteratively reclassifying and retraining the data groups at different clustering levels, the initial training and clustering errors due to overfitting will be sequentially corrected in later steps. Experimental results on real surveillance video show an improvement of the proposed method over a baseline method that uses single-sample-based similarity measure and spectral clustering.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128221702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 83
MAP Particle Selection in Shape-Based Object Tracking
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379835
A. Dore, C. Regazzoni, Mirko Musso
{"title":"MAP Particle Selection in Shape-Based Object Tracking","authors":"A. Dore, C. Regazzoni, Mirko Musso","doi":"10.1109/ICIP.2007.4379835","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379835","url":null,"abstract":"The Bayesian filtering for recursive state estimation and the shape-based matching methods are two of the most commonly used approaches for target tracking. The multiple hypothesis shape-based tracking (MHST) algorithm, proposed by the authors in a previous work, combines these two techniques using the particle filter algorithm. The state of the object is represented by a vector of the target corners (i.e. points in the image with high curvature) and the multiple state configurations (particles) are propagated in time with a weight associated to their probability. In this paper we demonstrate that, in the MHST, the likelihood probability used to update the weights is equivalent to the voting mechanism for generalized Hough transform (GHT)-based tracking. This statement gives an evident explanation about the suitability of a MAP (maximum a posteriori) estimate from the posterior probability obtained using MHST. The validity of the assertion is verified on real sequences showing the differences between the MAP and the MMSE estimate.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128479020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
A Generalized Multiple Instance Learning Algorithm for Iterative Distillation and Cross-Granular Propagation of Video Annotations
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379128
Feng Kang, M. Naphade
{"title":"A Generalized Multiple Instance Learning Algorithm for Iterative Distillation and Cross-Granular Propagation of Video Annotations","authors":"Feng Kang, M. Naphade","doi":"10.1109/ICIP.2007.4379128","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379128","url":null,"abstract":"Video annotation is an expensive but necessary task for most vision and learning problems that require building models of visual semantics. This annotation gets prohibitively expensive especially when annotation has to happen at finer grained levels of regions in the videos. One way around the finer grained annotation dilemma is to support annotation at coarser granularity and then propagate this annotation to the finer granularity in a concept-dependent way. In this paper we propose a new generalized multiple instance learning algorithm that can work with any underlying density modeling techniques, and help propagate semantic concepts provided at the coarse granularity of video key-frames to finer grained regions. Our experiments on the NIST TRECVID common annotation corpus reveal improvement in annotation propagation accuracy between 3% to a dramatic 161%.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"9 18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128516119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Computer-Aided Grading of Neuroblastic Differentiation: Multi-Resolution and Multi-Classifier Approach
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379881
Jun Kong, Olcay Sertel, H. Shimada, K. Boyer, J. Saltz, M. Gürcan
{"title":"Computer-Aided Grading of Neuroblastic Differentiation: Multi-Resolution and Multi-Classifier Approach","authors":"Jun Kong, Olcay Sertel, H. Shimada, K. Boyer, J. Saltz, M. Gürcan","doi":"10.1109/ICIP.2007.4379881","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379881","url":null,"abstract":"In this paper, the development of a computer-aided system for the classification of grade of neuroblastic differentiation is presented. This automated process is carried out within a multi-resolution framework that follows a coarse-to-fine strategy. Additionally, a novel segmentation approach using the Fisher-Rao criterion, embedded in the generic expectation-maximization algorithm, is employed. Multiple decisions from a classifier group are aggregated using a two-step classifier combiner that consists of a majority voting process and a weighted sum rule using priori classifier accuracies. The developed system, when tested on 14,616 image tiles, had the best overall accuracy of 96.89%. Furthermore, multi-resolution scheme combined with automated feature selection process resulted in 34% savings in computational costs on average when compared to a previously developed single-resolution system. Therefore, the performance of this system shows good promise for the computer-aided pathological assessment of the neuroblastic differentiation in clinical practice.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128805163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
Distributed Compression of Multi-View Images using a Geometrical Coding Approach
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379611
N. Gehrig, P. Dragotti
{"title":"Distributed Compression of Multi-View Images using a Geometrical Coding Approach","authors":"N. Gehrig, P. Dragotti","doi":"10.1109/ICIP.2007.4379611","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379611","url":null,"abstract":"In this paper, we propose a distributed compression approach for multi-view images, where each camera efficiently encodes its visual information locally without requiring any collaboration with the other cameras. Such a compression scheme can be necessary for camera sensor networks, where each camera has limited power and communication resources and can only transmit data to a central base station. The correlation in the multi-view data acquired by a dense multi-camera system can be extremely large and should therefore be exploited at each encoder in order to reduce the amount of data transmitted to the receiver. Our distributed source coding approach is based on a quadtree decomposition method and uses some geometrical information about the scene and the position of the cameras to estimate this multi-view correlation. We assume that the different views can be modelled as 2D piecewise polynomial functions with ID linear boundaries and show how our approach applies in this context. Our simulation results show that our approach outperforms independent encoding of real multi-view images.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129029357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Super-Resolution using Motion and Defocus Cues
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379992
K. Suresh, A. Rajagopalan
{"title":"Super-Resolution using Motion and Defocus Cues","authors":"K. Suresh, A. Rajagopalan","doi":"10.1109/ICIP.2007.4379992","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379992","url":null,"abstract":"Reconstruction-based super-resolution algorithms use either sub-pixel shifts or relative blur among low-resolution observations as a cue to obtain a high-resolution image. In this paper, we propose a super-resolution algorithm that exploits the information available in the low-resolution observations due to both sub-pixel shifts and relative blur to yield a better quality image. Performance analysis is carried out based on the Cramer-Rao lower bound. Several experimental results on synthetic and real images are given for validation.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129045982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Unseen Visible Watermarking
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379296
Shang-Chih Chuang, Chun-Hsiang Huang, Ja-Ling Wu
{"title":"Unseen Visible Watermarking","authors":"Shang-Chih Chuang, Chun-Hsiang Huang, Ja-Ling Wu","doi":"10.1109/ICIP.2007.4379296","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379296","url":null,"abstract":"A novel data-hiding methodology, denoted as unseen visible watermarking (UVW), is proposed. The proposed scheme is inspired by real-world watermarks and possesses advantages of both visible and invisible watermarking schemes. After watermark embedding, the differences between the original work and the stego work are imperceptible under normal viewing conditions. However, when the hidden message is to be extracted, no explicit watermark extracting module is required. Semantically-meaningful watermark patterns can be directly recognized from the stego work as long as common imaging-related functions, e.g. gamma-correction or even simply changing the user-viewing angle relative to the LCD monitor, are performed. The proposed scheme outperforms existing invisible watermarking methods in its capability to practically convey metadata to users of legacy display devices lacking renewal capability. On the other hand, it does not suffer from the annoying quality-degradation problem of visible watermarking schemes. Limitations and possible extensions of the proposed schemes are also addressed. We believe that many interesting new applications can be facilitated using such unseen visible watermarking schemes.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124574478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Video Modeling by Spatio-Temporal Resampling and Bayesian Fusion
2007 IEEE International Conference on Image Processing Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379607
Yunfei Zheng, Xin Li
{"title":"Video Modeling by Spatio-Temporal Resampling and Bayesian Fusion","authors":"Yunfei Zheng, Xin Li","doi":"10.1109/ICIP.2007.4379607","DOIUrl":"https://doi.org/10.1109/ICIP.2007.4379607","url":null,"abstract":"In this paper, we propose an empirical Bayesian approach toward video modeling and demonstrate its application in multiframe image restoration. Based on our previous work on spatio-temporall adaptive localized learning (STALL), we introduce a new concept of spatio-temporal resampling to facilitate the task of video modeling. Resampling produces a redundant representation of video signals with distributed spatio-temporal characteristics. When combined with STALL model, we show how to probabilistically combine the linear regression results of resampled video signals under a Bayesian framework. Such empirical Bayesian approach opens the door to develop a whole new class of video processing algorithms without explicit motion estimation or segmentation. The potential of our distributed video model is justified by considering its application into two multiframe image restoration tasks: repair damaged blocks and remove impulse noise.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124693196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1