{"title":"Multi-scale dynamic neural net architectures","authors":"L. Atlas, R. Marks, M. Donnell, J. Taylor","doi":"10.1109/PACRIM.1989.48413","DOIUrl":null,"url":null,"abstract":"The design of specialized trainable neural network architectures for temporal problems is described. Multilayer extensions of previous dynamic neural net architectures are considered. Two of the key attributes of these architectures are smoothing and decimation between layers. An analysis of parameters (weights) to estimate suggests a massive reduction in training data needed for multiscale topologies for networks with large temporal input windows. The standard back-propagation training rules are modified to allow for smoothing between layers, and preliminary simulation results for these new rules are encouraging. For example, a binary problem with an input of size 32 converged in three iterations with smoothing and never converged when there was no smoothing.<<ETX>>","PeriodicalId":256287,"journal":{"name":"Conference Proceeding IEEE Pacific Rim Conference on Communications, Computers and Signal Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference Proceeding IEEE Pacific Rim Conference on Communications, Computers and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PACRIM.1989.48413","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The design of specialized trainable neural network architectures for temporal problems is described. Multilayer extensions of previous dynamic neural net architectures are considered. Two of the key attributes of these architectures are smoothing and decimation between layers. An analysis of the number of parameters (weights) to be estimated suggests a massive reduction in the training data needed for multi-scale topologies in networks with large temporal input windows. The standard back-propagation training rules are modified to allow for smoothing between layers, and preliminary simulation results for these new rules are encouraging. For example, a binary problem with an input of size 32 converged in three iterations with smoothing and never converged when there was no smoothing.
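To make the parameter-reduction argument concrete, the sketch below shows one way smoothing and decimation between layers can shrink the input dimension seen by later layers. This is not the authors' implementation: the moving-average smoother, the decimation factor of 2, the sigmoid nonlinearity, and all layer sizes are illustrative assumptions, with only the input window of 32 taken from the abstract.

```python
# Hypothetical sketch of smoothing + decimation between layers (assumed details,
# not the architecture or training rules from the paper).
import numpy as np

def smooth_and_decimate(x, filter_len=3, decimation=2):
    """Smooth a temporal activation vector, then keep every `decimation`-th sample."""
    kernel = np.ones(filter_len) / filter_len        # simple moving-average smoother (assumed)
    smoothed = np.convolve(x, kernel, mode="same")   # smoothing between layers
    return smoothed[::decimation]                    # decimation between layers

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.standard_normal(32)                          # temporal input window of size 32

# Layer 1: 32 inputs -> 16 hidden units, then smooth and decimate down to 8 values.
W1 = 0.1 * rng.standard_normal((16, 32))
h1 = smooth_and_decimate(sigmoid(W1 @ x))

# Layer 2 now sees only 8 inputs instead of 16, so it needs half as many weights;
# stacking such stages is what yields the multi-scale reduction in parameters.
W2 = 0.1 * rng.standard_normal((4, h1.size))
h2 = sigmoid(W2 @ h1)
print(h2.shape)                                      # (4,)
```

Under these assumptions, each smoothing-plus-decimation stage halves the temporal resolution passed upward, so the count of weights to estimate (and hence the training data required) falls with depth rather than scaling with the full input window.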