Proceedings 2000 International Conference on Image Processing (Cat. No.00CH37101): Latest Articles

Redesigning of JPEG statistical model in the lossy mode fitting distribution of DCT coefficients
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899583
Authors: Y. Kuroki, Yoshifumi Ueshige, T. Ohta
Abstract: The JPEG statistical models in the lossy mode specify the procedures for converting the discrete cosine transform (DCT) coefficients into binary strings and context modeling in the case where the binary arithmetic coder called the QM-coder is employed as an entropy coder. The JPEG lossy mode establishes two statistical models, one for prediction residuals of the DC coefficients and the other for the AC coefficients. We redesign these two models by taking account of their distribution. We confirm that the Laplacian distribution is appropriate for both the DC coefficients and the AC coefficients through the Kolmogorov-Smirnov (KS) test; consequently, we propose statistical models that fit the Laplacian distribution. By adopting the proposed statistical models in lieu of the conventional models, the number of states decreases from 294 to 210 and the compression performance on several test images, including super high definition images, improves by 0.01 to 1.48%.
Citations: 4

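The Laplacian-fit check described in this abstract can be sketched in a few lines: fit a Laplacian by maximum likelihood (median for location, mean absolute deviation for scale) and compute the Kolmogorov-Smirnov distance directly. The data below are synthetic stand-ins for DCT coefficients, not the paper's test images, and the thresholds are illustrative only.

```python
import numpy as np

def laplace_cdf(x, loc, scale):
    z = (x - loc) / scale
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))

def ks_statistic_vs_laplace(sample):
    """ML-fit a Laplacian (median location, mean-absolute-deviation
    scale) and return the Kolmogorov-Smirnov distance to it."""
    loc = np.median(sample)
    scale = np.mean(np.abs(sample - loc))
    s = np.sort(sample)
    cdf = laplace_cdf(s, loc, scale)
    n = s.size
    upper = np.max(np.arange(1, n + 1) / n - cdf)   # ECDF above model CDF
    lower = np.max(cdf - np.arange(0, n) / n)       # model CDF above ECDF
    return max(upper, lower)

rng = np.random.default_rng(0)
# Synthetic stand-ins for AC coefficients (not the paper's data).
laplacian_like = rng.laplace(0.0, 8.0, 5000)
gaussian_like = rng.normal(0.0, 8.0, 5000)

d_lap = ks_statistic_vs_laplace(laplacian_like)
d_gau = ks_statistic_vs_laplace(gaussian_like)
print(f"KS distance, Laplacian data: {d_lap:.4f}")
print(f"KS distance, Gaussian data:  {d_gau:.4f}")
```

A clearly smaller KS distance for the Laplacian sample is what justifies choosing that model; `scipy.stats.kstest` would give the same statistic plus a p-value.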
Improved compression by coupling of coding techniques and redundant transform
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899591
Authors: A. Poussard, C. Olivier, J. H. Wu, C. Chatellier
Abstract: The techniques commonly used in image coding (JPEG, MPEG, ...) aim primarily to compress as much as possible while retaining most of the information. These methods are often based on the discrete cosine transform (DCT) and the wavelet transform (WT). Our purpose is to consider the redundancy necessary to achieve good reception in the case of heavy interruptions of bit transmission. There is, however, a contradiction between optimal compression and the required redundancy: the information to be transmitted must be compressed as much as possible while still withstanding noise on the transmission channel. This article contributes to this difficult problem; its originality resides in the coupling of orthogonal transforms with a redundant transform. Simulation results obtained with our method are compared with those of DCT- and WT-based methods on the Lena and Mountain images.
Citations: 1

Curve evolution, boundary-value stochastic processes, the Mumford-Shah problem, and missing data applications
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899521
Authors: A. Tsai, A. Yezzi, A. Willsky
Abstract: We present an estimation-theoretic approach to curve evolution for the Mumford-Shah problem. By viewing an active contour as the set of discontinuities in the Mumford-Shah problem, we may use the corresponding functional to determine gradient descent evolution equations to deform the active contour. In each gradient descent step, we solve a corresponding optimal estimation problem, connecting the Mumford-Shah functional and curve evolution with the theory of boundary-value stochastic processes. In employing the Mumford-Shah functional, our active contour model inherits its attractive ability to generate, in a coupled manner, both a smooth reconstruction and a segmentation of the image. Next, by generalizing the data fidelity term of the original Mumford-Shah functional to incorporate a spatially varying penalty, we extend our method to problems in which data quality varies across the image and to images in which sets of pixel measurements are missing. This more general model leads us to a novel PDE-based approach for simultaneous image magnification, segmentation, and smoothing, extending the traditional applications of the Mumford-Shah functional, which consider only simultaneous segmentation and smoothing.
Citations: 2

A computational image sensor with pixel-based integration time control
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.901062
Authors: T. Hamamoto, K. Aizawa
Abstract: We have been investigating a computational sensor which controls the integration time of every pixel independently. Because the integration time is controlled, higher temporal resolution and wider dynamic range can be achieved. We present a new adaptive integration time image sensor which has 128×64 pixels. We adopt a column parallel architecture to design the prototype chip. The scheme to control integration time is extended, and pixel pitch, processing speed and power consumption are much improved in comparison with our previous prototype. We show some experimental results obtained with the prototype.
Citations: 9

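The dynamic-range benefit of per-pixel integration control can be illustrated with a toy simulation. The closed-form control rule, full-well constant and scene values below are all hypothetical; the actual chip realizes the adaptation with a column-parallel control loop.

```python
import numpy as np

FULL_WELL = 1.0   # saturation charge (arbitrary units, hypothetical)
T_MAX = 16        # longest allowed integration time

def adaptive_integration(radiance):
    """Choose, per pixel, the longest integer integration time that
    avoids saturation, then recover radiance as charge / time."""
    radiance = np.asarray(radiance, dtype=float)
    t = np.clip(np.floor(FULL_WELL / np.maximum(radiance, 1e-12)), 1, T_MAX)
    charge = np.minimum(radiance * t, FULL_WELL)   # clip at full well
    return charge / t, t

# A fixed exposure of T_MAX would saturate every pixel brighter than
# FULL_WELL / T_MAX; per-pixel control recovers them (up to FULL_WELL).
scene = np.array([0.01, 0.05, 0.2, 0.9, 3.0])
estimate, times = adaptive_integration(scene)
print("integration times:", times)
print("estimates:        ", estimate)
```

Dark pixels integrate for the full 16 steps (better SNR), bright ones for a single step, so the whole range up to the full-well rate is represented.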
Image denoising using wavelet thresholding and model selection
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899345
Authors: Shi Zhong, V. Cherkassky
Abstract: This paper describes wavelet thresholding for image denoising under the framework provided by statistical learning theory, also known as Vapnik-Chervonenkis (VC) theory. Under this framework, wavelet thresholding amounts to ordering wavelet coefficients according to their relevance to accurate function estimation, followed by discarding the insignificant coefficients. Existing wavelet thresholding methods specify an ordering based on coefficient magnitude and use thresholds derived under a Gaussian noise assumption and asymptotic settings. In contrast, the proposed approach uses orderings that better reflect the statistical properties of natural images, and VC-based thresholding developed for finite-sample settings under very general noise assumptions. A tree structure is proposed to order the wavelet coefficients based on their magnitude, scale and spatial location. The choice of a threshold is based on the general VC method for model complexity control. Empirical results show that the proposed method outperforms Donoho's (1992, 1995) level-dependent thresholding techniques, and the advantages become more significant under finite-sample and non-Gaussian noise settings.
Citations: 99

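The baseline this paper improves on, magnitude-based hard thresholding of wavelet coefficients, can be sketched with a hand-rolled one-level Haar transform on a 1-D signal. The fixed threshold below stands in for the paper's VC-based ordering and threshold selection, which are not reproduced here.

```python
import numpy as np

def haar_1d(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar_1d(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_hard(x, thresh):
    """Keep only detail coefficients whose magnitude exceeds thresh;
    small details are presumed to be noise and are zeroed."""
    a, d = haar_1d(x)
    return ihaar_1d(a, np.where(np.abs(d) > thresh, d, 0.0))

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 64)   # piecewise-constant signal
noisy = clean + rng.normal(0.0, 0.5, clean.size)
denoised = denoise_hard(noisy, thresh=1.0)

noisy_mse = np.mean((noisy - clean) ** 2)
denoised_mse = np.mean((denoised - clean) ** 2)
print(f"MSE noisy: {noisy_mse:.3f}  denoised: {denoised_mse:.3f}")
```

For a piecewise-smooth signal the clean detail coefficients are nearly zero away from edges, so zeroing the small ones removes mostly noise; this is the property the paper's tree-structured ordering exploits more carefully.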
Mean field annealing EM for image segmentation
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899511
Authors: Wanhyun Cho, Soohyung Kim, Soonyoung Park, Jonghyun Park
Abstract: We present a statistical model-based approach to color image segmentation. A novel deterministic annealing expectation-maximization (EM) algorithm and mean field theory are used to estimate the posterior probability of each pixel and the parameters of the Gaussian mixture model that statistically represents the multi-colored objects. Image segmentation is carried out by clustering each pixel into the most probable component Gaussian. The experimental results show that the mean field annealing EM provides a global optimal solution for maximum likelihood parameter estimation, and that real images are segmented efficiently using estimates computed by the maximum entropy principle and mean field theory.
Citations: 4

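The EM machinery underlying this approach can be sketched on a 1-D intensity "image". This is plain EM without the deterministic-annealing temperature schedule the paper adds, and the two pixel populations are synthetic, illustrative data.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Plain EM for a 1-D Gaussian mixture (the paper's annealed E-step
    with a temperature schedule is omitted for brevity)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel.
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2.0 * np.pi * var) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0)
        pi = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, r.argmax(axis=1)

# Two pixel populations standing in for a dark and a bright object.
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(50, 5, 500), rng.normal(150, 10, 500)])
mu, var, pi, labels = em_gmm_1d(pixels)
print("estimated means:", np.sort(mu))
```

Segmentation is then exactly the abstract's last step: each pixel is assigned to its most probable component Gaussian (`labels`).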
Informed embedding: exploiting image and detector information during watermark insertion
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899260
Authors: Matthew L. Miller, I. Cox, J. Bloom
Abstract: Usually watermark embedding simply adds a globally or locally attenuated watermark pattern to the cover data (photograph, music, movie). The attenuation is required to maintain fidelity of the cover data to an observer, while the watermark detector considers the cover data to be "noise". We refer to this as blind embedding. Cox, Miller and McKellips (see Proceedings of the IEEE, vol.87, no.7, p.1127-41, 1999) observed that the cover data is not noise, i.e. it is not random but completely known at the time of embedding. This knowledge, along with knowledge of the detection algorithm to be used, allows a new category of informed embedder to be realized. We describe a simple watermarking algorithm and then compare the performance of blind embedding with three types of informed embedding. Note that in all four cases the watermark detector is unchanged; only the embedder is altered. Experimental results clearly reveal the improvement of informed over blind embedding.
Citations: 106

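The blind-versus-informed distinction is easy to make concrete with a correlation detector: a blind embedder adds the pattern at a fixed strength regardless of the cover, while an informed embedder, knowing both the cover and the detector, adds exactly enough to reach a target detection margin. The detector, threshold, and margin below are illustrative, not the paper's exact system.

```python
import numpy as np

N = 1024
TAU = 1.0     # detector threshold on normalized correlation (illustrative)
MARGIN = 0.5  # robustness margin targeted by the informed embedder

def detect(signal, mark):
    """Correlation detector: watermark declared present if the
    normalized correlation with the pattern reaches TAU."""
    return signal @ mark / len(mark) >= TAU

rng = np.random.default_rng(3)
cover = rng.normal(0.0, 10.0, N)       # stand-in for transform coefficients
mark = rng.choice([-1.0, 1.0], N)      # +/-1 watermark pattern

# Blind embedding: fixed global strength, ignores the cover, so the
# final correlation (and hence detection) depends on the cover.
blind = cover + 1.2 * mark

# Informed embedding: the embedder knows the cover and the detector,
# so it adds exactly enough to land at TAU + MARGIN.
c0 = cover @ mark / N                  # correlation already in the cover
informed = cover + (TAU + MARGIN - c0) * mark

print("blind detected:   ", detect(blind, mark))
print("informed detected:", detect(informed, mark))
```

The informed embedder guarantees detection at a known margin for every cover; the blind one only achieves it when the cover's own correlation happens to cooperate.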
A recursive soft-decision PSF and neural network approach to adaptive blind image regularization
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899580
Authors: Kim-Hui Yap, L. Guan
Abstract: We present a new approach to adaptive blind image regularization based on a neural network and soft-decision blur identification. We formulate blind image deconvolution into a recursive scheme by projecting and optimizing a novel cost function with respect to its image and blur subspaces. The new algorithm provides a continual blur adaptation towards the best-fit parametric structure throughout the restoration. It integrates the knowledge of real-life blur structures without compromising its flexibility in restoring images degraded by other nonstandard blurs. A nested neural network, called the hierarchical cluster model, is employed to provide an adaptive, perception-based restoration. On the other hand, conjugate gradient optimization is adopted to identify the blur. Experimental results show that the new approach is effective in restoring the degraded image without prior knowledge of the blur.
Citations: 5

Steganography for a low bit-rate wavelet based image coder
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.901029
Authors: S. Areepongsa, Y. Syed, N. Kaewkamnerd, K. Rao
Abstract: This approach inserts a hidden steganographic message into a base layer transmission of a zerotree based wavelet coder. The message is hidden in sign/bit values of insignificant children of the detail subbands in nonsmooth regions of the image. The HC-RTOT coder is used to determine which regions of the image the message can be embedded in. The coder also determines which wavelet coefficients in the detail subbands of these regions are used for messaging, via a steganographic mask that can be unique to each transmission occurrence of the image. The advantage of this approach is the ability to send steganographic messages in lossy environments with robustness against detection or attack. Preliminary results indicate that the message can be sent with a 4%-10% overhead capacity of the base layer bitstream (0.04-0.1 bpp), depending on several variations of the method.
Citations: 26

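A toy version of the core idea, hiding bits in the signs of insignificant detail coefficients, can be written with a one-level Haar transform standing in for the HC-RTOT coder. The region mask, the zerotree structure, and all constants here are illustrative simplifications.

```python
import numpy as np

def haar(x):
    """One-level Haar analysis: per-pair average and half-difference."""
    return (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2

def ihaar(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = a + d
    x[1::2] = a - d
    return x

def embed_bits(signal, bits, eps=0.5):
    """Hide bits in the signs of insignificant detail coefficients
    (|d| < eps): each chosen coefficient is forced to +eps/2 or -eps/2
    according to the bit, keeping the distortion small."""
    a, d = haar(np.asarray(signal, dtype=float))
    slots = np.flatnonzero(np.abs(d) < eps)[: len(bits)]
    d[slots] = np.where(np.asarray(bits) == 1, eps / 2, -eps / 2)
    return ihaar(a, d), slots

def extract_bits(marked, slots):
    """Recover the message from the coefficient signs (the slot list
    plays the role of the steganographic mask)."""
    _, d = haar(marked)
    return (d[slots] > 0).astype(int).tolist()

message = [1, 0, 1, 1, 0, 0, 1]
cover = np.linspace(0, 10, 64)   # smooth cover: tiny detail coefficients
marked, slots = embed_bits(cover, message)
print("recovered:", extract_bits(marked, slots))
print("max distortion:", np.max(np.abs(marked - cover)))
```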
Switched error concealment and robust coding decisions in scalable video coding
Pub Date: 2000-09-10 | DOI: 10.1109/ICIP.2000.899413
Authors: Rui Zhang, S. Regunathan, K. Rose
Abstract: This work introduces two complementary techniques to improve the packet loss resilience of scalable video coding systems. First, a "switch per-pixel" error concealment (SPEC) scheme is proposed, which allows the decoder to exploit information from both the current base layer and previous enhancement-layer frame for the reconstruction of missing enhancement-layer blocks. Based on the packet loss history and the quantized base-layer data, the algorithm switches per pixel between the two information sources. SPEC is shown to consistently outperform standard concealment methods. The second main contribution is concerned with encoder decision optimization. Enhancement layer prediction modes are selected so as to minimize the overall decoder reconstruction distortion, which is due to quantization, packet loss and error propagation. The distortion computation uses a recursive optimal per-pixel estimate (ROPE) to accurately account for the effects of error concealment as well as spatial and temporal error propagation. Simulation results show that ROPE-based mode selection substantially outperforms conventional prediction mode selection schemes. Finally, the combination of SPEC at the decoder and ROPE-based mode selection at the encoder is shown to achieve significant additional performance gains.
Citations: 12

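The per-pixel switching idea can be sketched as follows. The consistency rule, reusing the previous enhancement-layer pixel only when it falls inside the base layer's quantization cell, is a simplified reading of SPEC, and all values are illustrative.

```python
import numpy as np

Q = 8  # base-layer quantization step (illustrative)

def conceal_block(base, prev_enh):
    """Per-pixel switch: reuse the previous enhancement-layer pixel
    when it is consistent with the current base layer's quantization
    cell, otherwise fall back to the base-layer pixel."""
    temporal_ok = np.abs(prev_enh - base) <= Q / 2
    return np.where(temporal_ok, prev_enh, base)

# A lost 4x4 enhancement block: the base layer is a coarse version of
# the truth; the previous frame is accurate except where motion occurred.
truth = np.array([[10.0, 12.0, 14.0, 16.0]] * 4)
base = np.round(truth / Q) * Q        # coarsely quantized base layer
prev = truth.copy()
prev[:, 2:] += 30                     # motion made these pixels stale
concealed = conceal_block(base, prev)

err_base = np.abs(base - truth).mean()
err_conc = np.abs(concealed - truth).mean()
print("base-only error:", err_base, " switched error:", err_conc)
```

Static pixels inherit the sharp previous-frame values while moved pixels fall back to the base layer, which is why the switched result beats either source used alone.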