{"title":"级联网络架构","authors":"E. Littmann, H. Ritter","doi":"10.1109/IJCNN.1992.226955","DOIUrl":null,"url":null,"abstract":"A novel incremental cascade network architecture based on error minimization is presented. The properties of this and related cascade architectures are discussed, and the influence of the objective function is investigated. The performance of the network is achieved by several layers of nonlinear units that are trained in a strictly feedforward manner and one after the other. Nonlinearity is generated by using sigmoid units and, optionally, additional powers of their activity values. Extensive benchmarking results for the XOR problem are reported, as are various classification tasks, and time series prediction. These are compared to other results reported in the literature. Direct cascading is proposed as promising approach to introducing context information in the approximation process.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"37","resultStr":"{\"title\":\"Cascade network architectures\",\"authors\":\"E. Littmann, H. Ritter\",\"doi\":\"10.1109/IJCNN.1992.226955\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A novel incremental cascade network architecture based on error minimization is presented. The properties of this and related cascade architectures are discussed, and the influence of the objective function is investigated. The performance of the network is achieved by several layers of nonlinear units that are trained in a strictly feedforward manner and one after the other. Nonlinearity is generated by using sigmoid units and, optionally, additional powers of their activity values. Extensive benchmarking results for the XOR problem are reported, as are various classification tasks, and time series prediction. These are compared to other results reported in the literature. Direct cascading is proposed as promising approach to introducing context information in the approximation process.<<ETX>>\",\"PeriodicalId\":286849,\"journal\":{\"name\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"37\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1992.226955\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1992.226955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A novel incremental cascade network architecture based on error minimization is presented. The properties of this and related cascade architectures are discussed, and the influence of the objective function is investigated. The performance of the network is achieved by several layers of nonlinear units that are trained in a strictly feedforward manner, one after the other. Nonlinearity is generated by using sigmoid units and, optionally, additional powers of their activity values. Extensive benchmarking results are reported for the XOR problem, various classification tasks, and time-series prediction, and are compared with other results reported in the literature. Direct cascading is proposed as a promising approach to introducing context information into the approximation process.
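To make the incremental cascading idea concrete, the following is a minimal illustrative sketch, not the authors' exact algorithm: each layer is here a single sigmoid unit trained by gradient descent on a squared-error objective, its (optionally squared) output is appended to the input features, and the next unit is then trained on the enlarged feature set. Layer count, learning rate, one-unit layers, and the XOR setup below are assumptions chosen only for illustration.

# Minimal sketch of incremental cascading on XOR (illustrative assumptions only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_unit(X, y, epochs=2000, lr=0.5, rng=None):
    # Train one sigmoid unit by batch gradient descent on squared error.
    rng = np.random.default_rng(0) if rng is None else rng
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # append bias column
    w = rng.normal(scale=0.5, size=Xb.shape[1])
    for _ in range(epochs):
        out = sigmoid(Xb @ w)
        grad = Xb.T @ ((out - y) * out * (1.0 - out))       # d(SSE)/dw
        w -= lr * grad
    return w, sigmoid(Xb @ w)

# XOR data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Cascade: each new unit sees the raw inputs plus the previous units'
# outputs and (optionally) powers of those outputs for extra nonlinearity.
features = X.copy()
outputs = None
for layer in range(3):                                      # depth is an assumption
    w, outputs = train_unit(features, y)
    features = np.hstack([features, outputs[:, None], (outputs ** 2)[:, None]])

print(np.round(outputs, 2))   # typically approaches [0, 1, 1, 0]; no guarantee in this toy sketch

The sketch mirrors only the general scheme described in the abstract (strictly feedforward, layer-by-layer training driven by error minimization, with optional powers of activity values); the paper's actual layer composition, objective functions, and benchmark settings may differ.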