Authors: Bin Zou, Luoqing Li
DOI: 10.1109/ICPR.2006.1118
Venue: 18th International Conference on Pattern Recognition (ICPR'06)
Published: 2006-08-20
Publication type: Journal Article
Citation count: 1
The Generalization Performance of Learning Machine Based on Phi-mixing Sequence
Generalization performance is an important property of learning machines. It was shown previously by Vapnik, Cucker, and Smale that the empirical risks of learning machines based on i.i.d. sequences converge uniformly to their expected risks as the number of samples approaches infinity. This paper extends those results to the case where the i.i.d. sequence is replaced by a phi-mixing sequence. We establish the rate of uniform convergence of learning machines by using Bernstein's inequality for phi-mixing sequences, and we estimate the sample error of the learning machine. Finally, we compare these bounds with known results.
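For context, the uniform-convergence statement the abstract refers to has the following standard shape. This is a generic sketch, not the paper's exact theorem: the hypothesis space $\mathcal{F}$, the loss $\ell$, the capacity term $C(\mathcal{F},\varepsilon)$, and the constant $c$ are placeholders, not quantities taken from the paper.

```latex
% Expected risk and empirical risk over a sample z_1, ..., z_m
% drawn from a (possibly phi-mixing) stationary process:
\mathcal{E}(f) = \mathbb{E}_{z}\,\ell(f, z),
\qquad
\mathcal{E}_m(f) = \frac{1}{m}\sum_{i=1}^{m} \ell(f, z_i).

% Uniform convergence over a hypothesis space \mathcal{F}:
% with probability at least 1 - \delta, the empirical risks track
% the expected risks uniformly, via a Bernstein-type tail bound
\Pr\Big\{ \sup_{f \in \mathcal{F}}
  \big|\, \mathcal{E}(f) - \mathcal{E}_m(f) \,\big| > \varepsilon \Big\}
\;\le\; C(\mathcal{F}, \varepsilon)\,
        \exp\!\big( -c\, m\, \varepsilon^{2} \big).
```

In the i.i.d. setting this exponential tail follows from Bernstein's classical inequality. For a phi-mixing sequence, Bernstein-type inequalities effectively replace the sample size $m$ by a smaller "effective" sample size that discounts the dependence between observations, so the convergence rate degrades as the mixing coefficients $\phi(k)$ decay more slowly.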