Proceedings. International Conference on Image Processing: Latest Articles

Image restoration under wavelet-domain priors: an expectation-maximization approach
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038029
R. Nowak, Mário A. T. Figueiredo
Abstract: This paper describes an expectation-maximization (EM) algorithm for wavelet-based image restoration (deconvolution). The observed image is assumed to be a convolved (e.g., blurred) and noisy version of the original image. Regularization is achieved by using a complexity penalty/prior in the wavelet domain, taking advantage of the well-known sparsity of wavelet representations. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator in the discrete Fourier domain. The algorithm alternates between an FFT-based E-step and a DWT-based M-step, resulting in a very efficient iterative process requiring O(N log N) operations per iteration (where N stands for the number of pixels). The algorithm, which also estimates the noise variance, is called WAFER, standing for wavelet and Fourier EM restoration. The conditions for convergence of the proposed algorithm are also presented.
Citations: 4
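Under a sparsity-promoting wavelet prior of this kind, the coefficient update in the M-step takes the form of a soft-thresholding (shrinkage) rule. A minimal pure-Python sketch of that rule follows; the threshold value in the usage example is illustrative, not a value from the paper:

```python
def soft_threshold(w, t):
    """Shrink a wavelet coefficient w toward zero by threshold t.

    Coefficients with magnitude below t are set exactly to zero,
    which is how a sparsity penalty discards small detail
    coefficients while only attenuating large ones.
    """
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

# toy list of detail coefficients, threshold t = 1.0
coeffs = [3.0, -0.4, 0.9, -2.5, 0.1]
shrunk = [soft_threshold(w, 1.0) for w in coeffs]  # [2.0, 0.0, 0.0, -1.5, 0.0]
```

In the full algorithm this shrinkage would be applied to the DWT detail coefficients of the current E-step estimate, with the threshold tied to the noise variance and penalty weight.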
Biased reconstruction of wavelet coefficients in JPEG2000 decoding
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038951
A. Deever
Abstract: Lossy wavelet compression with JPEG2000 results in the loss of information through coefficient quantization. When decoding a lossy JPEG2000 compressed image, the exact original value of a quantized coefficient is unknown to the decoder, which must try to optimally assign a reconstruction value to the coefficient within the appropriate quantization interval. Typically, JPEG2000 decoders reconstruct a wavelet coefficient at the midpoint of its quantization interval. In this paper, alternative reconstruction algorithms are proposed that utilize statistics accumulated throughout decoding to improve the selection of reconstruction points. Biased reconstruction algorithms are described for zero-quantized coefficients as well as non-zero-quantized coefficients. The computational complexity of the algorithms is also analyzed. At bit rates ranging from 0.25-2 bits per pixel, the proposed techniques yield PSNR improvements on average of 0.1-0.15 dB relative to midpoint reconstruction.
Citations: 0
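For intuition, midpoint versus biased reconstruction of a dead-zone-quantized coefficient can be sketched as follows. This is a generic illustration, not the paper's adaptive estimator; the bias value 0.375 is an illustrative choice:

```python
def midpoint_reconstruct(q, step):
    # q is the signed quantizer index; reconstruct at the middle of
    # the quantization interval [|q|*step, (|q|+1)*step)
    if q == 0:
        return 0.0
    sign = 1.0 if q > 0 else -1.0
    return sign * (abs(q) + 0.5) * step

def biased_reconstruct(q, step, bias=0.375):
    # A bias below 0.5 pulls the reconstruction point toward zero,
    # matching the sharply peaked distribution of wavelet coefficients:
    # within an interval, values nearer zero are more probable.
    if q == 0:
        return 0.0
    sign = 1.0 if q > 0 else -1.0
    return sign * (abs(q) + bias) * step
```

The paper's contribution is to choose the bias adaptively from statistics gathered during decoding rather than fixing it in advance.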
A new algorithm for target tracking using fuzzy-edge-based feature matching and robust statistic
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038089
A. Behrad, S. Motamedi, K. Madani, M. Esnaashari
Abstract: We present a new algorithm for real-time tracking of moving targets in terrestrial scenes using a mobile camera. We used the fuzzy edge of the target and the modified LMedS statistic for robust tracking. In this method, we first select proper feature points from the edge of the target. These feature points are then matched with points in the region of interest in the next frame using fuzzy-edge-based feature matching. Then, using the modified LMedS statistic and an affine transformation, a motion model is calculated for the target. Using this model, the location of the target is identified in the next frame. In addition to the robust statistic, the use of reflectance information for edge detection has made our tracking algorithm reliable against high illumination changes. The tracking system is also capable of target shape recovery, and therefore it can successfully track targets at varying distance from the camera or while the camera is zooming.
Citations: 3
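The LMedS idea underlying the motion estimation can be illustrated with a plain least-median-of-squares line fit: minimal two-point models are sampled at random, and the model whose squared residuals have the smallest median wins, so up to roughly half the correspondences may be outliers. This is a generic sketch of the statistic, not the authors' modified version or their affine tracker:

```python
import random
from statistics import median

def lmeds_line(points, n_trials=200, seed=1):
    # points: list of (x, y) pairs; returns the (slope, intercept)
    # whose median squared residual over all points is smallest.
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate pair, cannot define a line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        med = median((y - (a * x + b)) ** 2 for x, y in points)
        if med < best_med:
            best_med, best = med, (a, b)
    return best
```

In the paper's setting the minimal model is an affine transform estimated from matched edge feature points rather than a 2D line, but the median-of-residuals selection principle is the same.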
Joint space-time image sequence segmentation based on volume competition and level sets
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038088
Mirko Ristivojevic, J. Konrad
Abstract: We address the issue of joint space-time segmentation of image sequences. Typical approaches to such segmentation consider two image frames at a time, and perform tracking of individual segments across time. We propose to perform this segmentation jointly over multiple frames. This leads to a 3D segmentation, i.e., a search for a volume "carved out" by a moving object in the (3D) image sequence domain. We pose the problem in a Bayesian framework and use the MAP criterion. Under suitable structural and segmentation/motion models, we convert MAP estimation to a functional minimization. The resulting problem can be viewed as volume competition, a 3D generalization of region competition. We parameterize the unknown surface to be estimated, but rather than solving for it using an active-surface approach, we embed it into a higher-dimensional function and use the level-set methodology. We show experimental results for the simpler case of object motion against a still background although, given suitable models, the general formulation can handle complex motion too.
Citations: 14
Efficient video similarity measurement with video signature
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038101
S. Cheung, A. Zakhor
Abstract: The video signature method has previously been proposed as a technique to summarize video efficiently for visual similarity measurements (see Cheung, S.-C. and Zakhor, A., Proc. SPIE, vol.3964, p.34-6, 2000; ICIP2000, vol.1, p.85-9, 2000; ICIP2001, vol.1, p.649-52, 2001). We now develop the necessary theoretical framework to analyze this method. We define our target video similarity measure based on the fraction of similar clusters shared between two video sequences. This measure is too computationally complex to be deployed in database applications. By considering this measure geometrically on the image feature space, we find that it can be approximated by the volume of the intersection between Voronoi cells of similar clusters. In the video signature method, sampling is used to estimate this volume. By choosing an appropriate distribution to generate samples, and ranking the samples based upon their distances to the boundary between Voronoi cells, we demonstrate that our target measure can be well approximated by the video signature method. Experimental results on a large dataset of Web video and a set of MPEG-7 test sequences with artificially generated similar versions are used to demonstrate the retrieval performance of our proposed techniques.
Citations: 233
Adaptive histograms and dissimilarity measure for texture retrieval and classification
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1040078
F. S. Lim, W. Leow
Abstract: Histogram-based dissimilarity measures are extensively used for content-based image retrieval. In an earlier paper, we proposed an efficient weighted correlation dissimilarity measure for adaptive-binning color histograms. Compared to existing fixed-binning histograms and dissimilarity measures, adaptive histograms together with weighted correlation produce the best overall performance in terms of high accuracy, small number of bins, no empty bins, and efficient computation for image classification and retrieval. This paper follows up on the study of adaptive histograms by applying them to texture classification, retrieval, and clustering. Adaptive histograms are generated from the amplitude of the discrete Fourier transform of images. Extensive comparisons with well-known texture features and dissimilarity measures show that, again, adaptive histograms and weighted correlation produce good overall performance.
Citations: 5
Feature-guided painterly image rendering
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038109
Nan Li, Zhiyong Huang
Abstract: Non-photo-realistic rendering (NPR) refers to any technique that can produce a non-photo-realistic image. We present a method for automatically generating a stroke-based painting from a digital image. The rendering process generates rectangular brush strokes with suitable location, orientation, and size. Inspired by the real painting process, where a painter always observes the distinctive features and decides the shape and orientation of the stroke, we apply techniques of image moment functions and texture analysis, from which features are extracted and used to guide the stroke generation. Techniques are also developed for dynamic determination of cropped image size and edge enhancement. The main features in the source image are well preserved.
Citations: 6
Learning user-specific parameters in a multibiometric system
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1037958
Anil K. Jain, A. Ross
Abstract: Biometric systems that use a single biometric trait have to contend with noisy data, restricted degrees of freedom, failure-to-enroll problems, spoof attacks, and unacceptable error rates. Multibiometric systems that use multiple traits of an individual for authentication alleviate some of these problems while improving verification performance. We demonstrate that the performance of multibiometric systems can be further improved by learning user-specific parameters. Two types of parameters are considered here: (i) thresholds that are used to decide whether a matching score indicates a genuine user or an impostor, and (ii) weights that are used to indicate the importance of matching scores output by each biometric trait. User-specific thresholds are computed using the cumulative histogram of impostor matching scores corresponding to each user. The user-specific weights associated with each biometric are estimated by searching for that set of weights which minimizes the total verification error. The tests were conducted on a database of 50 users who provided fingerprint, face, and hand geometry data, with 10 of these users providing data over a period of two months. We observed that user-specific thresholds improved system performance by ~2%, while user-specific weights improved performance by ~3%.
Citations: 286
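The weight-selection step can be sketched as an exhaustive grid search over score-fusion weights that minimizes the total verification errors on a user's genuine and impostor scores. The toy threshold, grid step, and score values below are assumptions for illustration; the paper does not specify its exact search procedure here:

```python
def fuse(scores, weights):
    # Weighted sum of per-modality match scores
    # (e.g. fingerprint, face, hand geometry); weights sum to 1.
    return sum(w * s for w, s in zip(weights, scores))

def best_weights(genuine, impostor, threshold=0.5, step=0.05):
    # Exhaustive search over the simplex of three non-negative
    # weights, minimizing false accepts + false rejects for one user.
    n = int(round(1.0 / step))
    best, best_err = None, float("inf")
    for i in range(n + 1):
        for j in range(n - i + 1):
            w = (i * step, j * step, (n - i - j) * step)
            fa = sum(1 for s in impostor if fuse(s, w) >= threshold)
            fr = sum(1 for s in genuine if fuse(s, w) < threshold)
            if fa + fr < best_err:
                best_err, best = fa + fr, w
    return best, best_err

# hypothetical per-user score tuples (fingerprint, face, hand geometry)
genuine = [(0.9, 0.4, 0.5), (0.8, 0.3, 0.6)]
impostor = [(0.1, 0.6, 0.5), (0.2, 0.7, 0.4)]
w, err = best_weights(genuine, impostor)
```

In this toy data the first modality separates the classes cleanly, so the search drives its weight up until the total error reaches zero; with real scores the minimum is generally nonzero and user-dependent.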
Power efficient H.263 video transmission over wireless channels
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1038078
X. Lu, Yao Wang, E. Erkip
Abstract: We introduce an approach for adaptive minimization of the total power consumption of wireless video communications subject to a given level of quality of service. Our approach exploits tradeoffs between the power consumption of the H.263 encoder, the Reed-Solomon channel encoder, and the transmitter. Simulation results show that source and channel coding parameters and transmit energy per bit should vary based on channel conditions. Optimized settings can reduce the total power consumption by a significant factor compared to fixed parameter settings which do not match the channel conditions.
Citations: 65
A compressed domain video object segmentation system
Proceedings. International Conference on Image Processing Pub Date: 2002-12-10 DOI: 10.1109/ICIP.2002.1037972
M. Hayes, M. Jamrozik
Abstract: A fast means of object segmentation in a video sequence using low-level features existing in the compressed video stream is presented. These features include the DCT coefficient values of I-frames and motion vectors. The work described here is the foundation of a spatial segmentation system that approaches real time. Potential applications for the system include the separation of foreground and background objects and video database searching.
Citations: 30