Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

T. Linder, G. Lugosi, K. Zeger
{"title":"源编码定理中的收敛速度,经验量化器设计,以及通用有损源编码","authors":"T. Linder, G. Lugosi, K. Zeger","doi":"10.1109/ISIT.1994.395069","DOIUrl":null,"url":null,"abstract":"Rates of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(/spl radic/(log m/m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(/spl radic/(log k/k)); (3) There exists a fixed rate universal lossy source coding scheme whose per letter MSE on n real valued source samples converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)); and (4) Consider a training set of n real valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m=[n/k] training vectors. Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)), if one chooses k=[1/R(1-/spl epsiv/)(log n)] /spl forall//spl epsiv/ /spl epsiv/(0,1).<<ETX>>","PeriodicalId":331390,"journal":{"name":"Proceedings of 1994 IEEE International Symposium on Information Theory","volume":"1667 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"130","resultStr":"{\"title\":\"Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding\",\"authors\":\"T. Linder, G. Lugosi, K. Zeger\",\"doi\":\"10.1109/ISIT.1994.395069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rates of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(/spl radic/(log m/m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(/spl radic/(log k/k)); (3) There exists a fixed rate universal lossy source coding scheme whose per letter MSE on n real valued source samples converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)); and (4) Consider a training set of n real valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m=[n/k] training vectors. 
Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)), if one chooses k=[1/R(1-/spl epsiv/)(log n)] /spl forall//spl epsiv/ /spl epsiv/(0,1).<<ETX>>\",\"PeriodicalId\":331390,\"journal\":{\"name\":\"Proceedings of 1994 IEEE International Symposium on Information Theory\",\"volume\":\"1667 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"130\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of 1994 IEEE International Symposium on Information Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT.1994.395069\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE International Symposium on Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.1994.395069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 130

Abstract

Rates-of-convergence results are established for vector quantization. Convergence rates are given as the vector dimension and/or the training-set size increases. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R:

(1) If a vector quantizer of fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(√(log m / m)).

(2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k / k)).

(3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges almost surely to the distortion-rate function D(R) as O(√(log log n / log n)).

(4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), provided one chooses k = ⌊(1/R)(1 − ε) log n⌋, for every ε ∈ (0, 1).
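As a concrete illustration of the empirical design setting in result (1), the sketch below trains a k-dimensional quantizer with 2^(Rk) codewords on m training vectors drawn from a bounded-support memoryless source, then compares its empirical MSE with its MSE on held-out samples standing in for the true source. This is only a hedged sketch: Lloyd's algorithm (k-means) is used as a practical stand-in for the exact empirical-MSE minimizer assumed by the theorem (it reaches only a local optimum), and all function names and parameter choices here are illustrative, not from the paper.

```python
import numpy as np

def design_empirical_quantizer(train, rate_R, n_iter=50, seed=0):
    # Approximate the empirical-MSE-minimizing k-dimensional quantizer at
    # rate R (i.e., 2**(R*k) codewords) with Lloyd's algorithm. Note this
    # finds a local optimum, not the exact empirical minimizer of the theorem.
    m, k = train.shape
    n_codewords = int(round(2 ** (rate_R * k)))
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(m, size=n_codewords, replace=False)].copy()
    for _ in range(n_iter):
        # Nearest-codeword assignment under squared Euclidean distance.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Centroid update; an empty cell keeps its previous codeword.
        for j in range(n_codewords):
            cell = train[assign == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook

def per_letter_mse(data, codebook):
    # Per-letter MSE, (1/k) * E||X - Q(X)||^2, of nearest-neighbor quantization.
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean() / data.shape[1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    k, R, m = 2, 2.0, 2000            # dimension, rate (bits/sample), training size
    train = rng.uniform(0.0, 1.0, size=(m, k))     # bounded-support memoryless source
    test = rng.uniform(0.0, 1.0, size=(20000, k))  # held-out proxy for the true source
    cb = design_empirical_quantizer(train, R)
    print("empirical MSE:", per_letter_mse(train, cb))
    print("held-out MSE: ", per_letter_mse(test, cb))
```

Under result (1), the gap between the held-out (true-source) MSE of such an empirically designed quantizer and the best achievable MSE shrinks as O(√(log m / m)) as the training-set size m grows, for fixed dimension k.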