{"title":"矢量量化器的训练失真问题","authors":"T. Linder","doi":"10.1109/ISIT.2000.866443","DOIUrl":null,"url":null,"abstract":"The in-training-set performance of a vector quantizer as a function of its training set size is investigated. For squared error distortion and independent training data, worst-case type upper bounds are derived on the minimum training distortion achieved by an empirically optimal quantizer. These bounds show that the training distortion can underestimate the minimum distortion of a truly optimal quantizer by as much as a constant times n/sup -1/2/, where n is the size of the training data. Earlier results provide lower bounds of the same order.","PeriodicalId":108752,"journal":{"name":"2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"38","resultStr":"{\"title\":\"On the training distortion of vector quantizers\",\"authors\":\"T. Linder\",\"doi\":\"10.1109/ISIT.2000.866443\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The in-training-set performance of a vector quantizer as a function of its training set size is investigated. For squared error distortion and independent training data, worst-case type upper bounds are derived on the minimum training distortion achieved by an empirically optimal quantizer. These bounds show that the training distortion can underestimate the minimum distortion of a truly optimal quantizer by as much as a constant times n/sup -1/2/, where n is the size of the training data. Earlier results provide lower bounds of the same order.\",\"PeriodicalId\":108752,\"journal\":{\"name\":\"2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060)\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"38\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT.2000.866443\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2000.866443","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The in-training-set performance of a vector quantizer as a function of its training set size is investigated. For squared error distortion and independent training data, worst-case type upper bounds are derived on the minimum training distortion achieved by an empirically optimal quantizer. These bounds show that the training distortion can underestimate the minimum distortion of a truly optimal quantizer by as much as a constant times n^{-1/2}, where n is the size of the training data. Earlier results provide lower bounds of the same order.
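
To make the quantity in question concrete, here is a minimal numerical sketch (not from the paper) of the gap between training distortion and out-of-training-set distortion. It fits a k-point quantizer to n i.i.d. samples with Lloyd's algorithm, used as a stand-in for the exact empirically optimal quantizer, and compares the distortion on the training set with the distortion on a large independent sample. The Gaussian source, the codebook size, and all function names are illustrative assumptions.

```python
# A minimal sketch. Assumptions: standard Gaussian source, Lloyd's algorithm
# as a proxy for the exact empirically optimal quantizer; names are illustrative.
import numpy as np

def lloyd(train, k, iters=50, seed=0):
    """Fit a k-point scalar quantizer to training data by Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(train, size=k, replace=False)
    for _ in range(iters):
        # Nearest-codeword assignment under squared error.
        idx = np.argmin((train[:, None] - codebook[None, :]) ** 2, axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(k):
            cell = train[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    return codebook

def distortion(x, codebook):
    """Average squared error of quantizing x with the given codebook."""
    return np.min((x[:, None] - codebook[None, :]) ** 2, axis=1).mean()

rng = np.random.default_rng(1)
test = rng.standard_normal(200_000)  # large sample approximates true distortion
for n in (100, 1_000, 10_000):
    train = rng.standard_normal(n)
    cb = lloyd(train, k=8)
    print(f"n={n:6d}  train={distortion(train, cb):.4f}  "
          f"test={distortion(test, cb):.4f}")
```

In runs of this kind the training distortion typically falls below the held-out distortion, and the two bracket the distortion of a truly optimal quantizer, with the bracket narrowing as n grows; this is the behavior whose worst-case rate (a constant times n^{-1/2}) the paper's bounds quantify.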