Roundoff error analysis of the PCA networks
T. Szabó, G. Horváth
IEEE Instrumentation and Measurement Technology Conference Sensing, Processing, Networking. IMTC Proceedings, May 19, 1997
DOI: 10.1109/IMTC.1997.603954
Abstract: This paper deals with some of the effects of finite-precision data representation and arithmetic in principal component analysis (PCA) neural networks. PCA networks are single-layer linear neural networks that use versions of Oja's learning rule. The paper concentrates on the effects of premature convergence, i.e., early termination of the learning process, and derives an approximate analytical expression for the lower limit of the learning-rate parameter. If the learning rate is selected below this limit, which depends on the statistical properties of the input data and on the quantum size used in the finite-precision arithmetic, convergence slows down significantly or the learning process stops before reaching the proper weight vector.
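The stalling mechanism described in the abstract can be illustrated with a toy simulation (a hedged sketch, not the paper's actual finite-precision model or analysis): when the learning rate is so small that a typical weight update falls below half the quantum size, the quantized update rounds to zero and the weight vector never moves toward the principal component. All function names and parameter values below are illustrative.

```python
import numpy as np

def oja_quantized(X, eta, q, steps=5000, seed=0):
    """One-unit Oja's rule with each weight update rounded to a quantum q.

    Illustrative sketch only: the paper's finite-precision model may differ.
    Here the update eta * y * (x - y*w) is rounded to the nearest multiple
    of q before being added, mimicking fixed-point weight storage.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        x = X[rng.integers(len(X))]
        y = w @ x                       # neuron output
        dw = eta * y * (x - y * w)      # Oja's learning rule
        w = w + np.round(dw / q) * q    # quantized weight update
    return w / np.linalg.norm(w)

# Data with a dominant principal direction along the first axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2)) * np.array([3.0, 0.3])

q = 1e-3  # quantum size of the weight representation
w_ok = oja_quantized(X, eta=0.01, q=q)     # updates exceed q/2: converges toward the principal axis
w_stall = oja_quantized(X, eta=1e-5, q=q)  # updates round to zero: weights barely move
print(abs(w_ok[0]), abs(w_stall[0]))
```

The intuition matches the dependence stated in the abstract: for learning to proceed, the learning rate must be large enough that typical update magnitudes reach at least half the quantum size, so the lower limit on the learning rate is governed jointly by the input statistics (through the output y) and by the quantum size of the arithmetic.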