{"title":"源编码定理中的收敛速度,经验量化器设计,以及通用有损源编码","authors":"T. Linder, G. Lugosi, K. Zeger","doi":"10.1109/ISIT.1994.395069","DOIUrl":null,"url":null,"abstract":"Rates of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(/spl radic/(log m/m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(/spl radic/(log k/k)); (3) There exists a fixed rate universal lossy source coding scheme whose per letter MSE on n real valued source samples converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)); and (4) Consider a training set of n real valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m=[n/k] training vectors. Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)), if one chooses k=[1/R(1-/spl epsiv/)(log n)] /spl forall//spl epsiv/ /spl epsiv/(0,1).<<ETX>>","PeriodicalId":331390,"journal":{"name":"Proceedings of 1994 IEEE International Symposium on Information Theory","volume":"1667 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"130","resultStr":"{\"title\":\"Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding\",\"authors\":\"T. Linder, G. Lugosi, K. Zeger\",\"doi\":\"10.1109/ISIT.1994.395069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rates of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(/spl radic/(log m/m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(/spl radic/(log k/k)); (3) There exists a fixed rate universal lossy source coding scheme whose per letter MSE on n real valued source samples converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)); and (4) Consider a training set of n real valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m=[n/k] training vectors. 
Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(/spl radic/(log log n/log n)), if one chooses k=[1/R(1-/spl epsiv/)(log n)] /spl forall//spl epsiv/ /spl epsiv/(0,1).<<ETX>>\",\"PeriodicalId\":331390,\"journal\":{\"name\":\"Proceedings of 1994 IEEE International Symposium on Information Theory\",\"volume\":\"1667 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"130\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of 1994 IEEE International Symposium on Information Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT.1994.395069\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE International Symposium on Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.1994.395069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding
Rate-of-convergence results are established for vector quantization. Convergence rates are given for increasing vector dimension and/or increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support, at transmission rate R. (1) If a vector quantizer of fixed dimension k is designed to minimize the empirical MSE with respect to m training vectors, then its MSE for the true source converges almost surely to the minimum possible MSE as O(√(log m / m)). (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension k grows, to the distortion-rate function D(R) as O(√(log k / k)). (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges almost surely to the distortion-rate function D(R) as O(√(log log n / log n)). (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the MSE of this quantizer for the true source converges almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), provided one chooses k = ⌊(1/R)(1 − ε) log n⌋ for any ε ∈ (0, 1).
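To make results (1) and (4) concrete, the sketch below is a minimal numerical illustration, not a construction from the paper: it draws n samples from a hypothetical uniform source, blocks them into m = ⌊n/k⌋ training vectors with k chosen as in result (4), and approximately minimizes the empirical MSE with Lloyd (k-means) iterations. The theorems concern the exact empirical-MSE minimizer, which Lloyd iterations only approximate locally; the source distribution, the values of R, ε, and n, and every name below are assumptions of the example.

```python
import numpy as np

def lloyd_quantizer(train, num_codevectors, iters=30, seed=0):
    """Approximate empirical-MSE minimization via Lloyd (k-means) iterations.

    train: (m, k) array of k-dimensional training vectors.
    Returns a (num_codevectors, k) codebook.  NOTE: Lloyd iterations reach
    only a local optimum; the theorem concerns the exact empirical minimizer.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with distinct training vectors.
    codebook = train[rng.choice(train.shape[0], num_codevectors, replace=False)]
    for _ in range(iters):
        # Nearest-neighbor partition: squared distances, shape (m, num_codevectors).
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        # Centroid step: move each codevector to its cell's mean (skip empty cells).
        for j in range(num_codevectors):
            cell = train[idx == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

def per_letter_mse(data, codebook):
    """Per-letter MSE of nearest-neighbor quantization with the given codebook."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean() / data.shape[1]

# Illustrative setup (not from the paper): uniform source on [0, 1), R = 1 bit/sample.
R, eps, n = 1.0, 0.5, 2 ** 14
rng = np.random.default_rng(1)
k = int((1.0 / R) * (1.0 - eps) * np.log2(n))  # blocklength choice from result (4)
m = n // k                                     # m = floor(n / k) training vectors
train = rng.random(n)[: m * k].reshape(m, k)   # block the n samples into k-vectors
codebook = lloyd_quantizer(train, num_codevectors=2 ** round(k * R))

test = rng.random(4 * m * k).reshape(-1, k)    # fresh samples from the true source
print(f"k = {k}, codebook size = {len(codebook)}, "
      f"test per-letter MSE = {per_letter_mse(test, codebook):.4f}")
```

The blocklength choice k ≈ (1/R)(1 − ε) log n keeps the codebook size 2^(kR) = n^(1−ε) sublinear in the number of training samples, so the empirically designed quantizer can remain reliable even as the dimension grows; intuitively, this trade-off is what lies behind the O(√(log log n / log n)) rate in result (4).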