{"title":"Ensemble methods for handwritten digit recognition","authors":"Lars Kai Hansen, C. Liisberg, P. Salamon","doi":"10.1109/NNSP.1992.253679","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253679","url":null,"abstract":"Neural network ensembles are applied to handwritten digit recognition. The individual networks of the ensemble are combinations of sparse look-up tables (LUTs) with random receptive fields. It is shown that the consensus of a group of networks outperforms the best individual of the ensemble. It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach by optimizing the receptive fields. It is concluded that it is possible to improve performance significantly by introducing moderate-size ensembles; in particular, a 20-25% improvement has been found. The ensemble random LUTs, when trained on a medium-size database, reach a performance (without rejects) of 94% correct classification on digits written by an independent group of people.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133051407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Globally trained neural network architecture for image compression","authors":"L. Schweizer, G. Parladori, G. L. Sicuranza","doi":"10.1109/NNSP.1992.253684","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253684","url":null,"abstract":"The authors discuss the development of a coding system for image transmission based on block-transform coding and vector quantization. Moreover, a classification of the image blocks is performed in the spatial domain. An architecture incorporating both multilayered perceptron and self-organizing feature map neural networks and a block classification is considered to realize the image coding scheme. A framework is proposed to globally train the whole image coding system. The achieved results confirm the merits of such an image coding scheme. The neural network integration is performed with a single learning phase, allowing faster training and better performance of the image coding system.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133751252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new algorithm for voicing detection and voice pitch estimation based on the neocognitron","authors":"James R. E. Moxham, P. A. Jones, Hugh J. McDermott, G. Clark","doi":"10.1109/NNSP.1992.253692","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253692","url":null,"abstract":"One of the more widely used cochlear implants is the Nucleus multielectrode implant, developed by the University of Melbourne and Cochlear Pty Ltd. The speech processor used with this implant is the MSP, programmed with the multipeak strategy. This device incorporates circuits to estimate the fundamental frequency (F0) of speech signals and to decide whether voicing is present. The authors describe a new F0 estimator and voicing detection algorithm based on the neocognitron. Performance was compared with that of three other F0 estimation algorithms: linear predictive coding, cepstral analysis, and the algorithm used in the Multipeak-MSP processor. For the speech samples tested, the neocognitron performed more reliably than the other three systems.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134068869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real time CCD-based neural network system for pattern recognition applications","authors":"A. Chiang","doi":"10.1109/NNSP.1992.253651","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253651","url":null,"abstract":"A generic NNC (neural network classifier) capable of providing 1.9 billion programmable connections per second is described. Applications for these generic processors include image and speech recognition as well as sonar signal identification. To demonstrate the modularity and flexibility of the CCD (charge coupled device) NNCs, two generic multilayer system-level boards capable of both feedforward and feedback nets are presented. The boards demonstrate multiple LL NN chips in an adaptable, reconfigurable, expandable multipurpose system design. Although only two examples are demonstrated, the extension to larger and more complicated networks using multiple NN devices as building blocks is straightforward.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"364 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115440815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discrete neural networks and fingerprint identification","authors":"S. Sjogaard","doi":"10.1109/NNSP.1992.253681","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253681","url":null,"abstract":"The author has developed a general method for discretization of feedforward neural networks and has empirically demonstrated the usefulness of the method by successfully applying it to the nontrivial task of fingerprint identification. Surprisingly, the discrete neural network (DNN) developed in this way demanded just 4 b for the table representation of the sigmoid function, and only 6 b for the representation of the matching discrete solution. It is clearly shown that there is no significant difference in the performance on the test set between the real neural network and the DNN. Thus, it is concluded that the discretization methods proposed have shown themselves to be realistic.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114299435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constructing neural networks for contact tracking","authors":"C. DeAngelis, R. Green","doi":"10.1109/NNSP.1992.253656","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253656","url":null,"abstract":"A neural network approach for contact state estimation is presented. This neural network, NICE (neurally inspired contact estimation), has been constructed to directly embody the major problem domain constraint of uniform contact velocity and heading. NICE networks are constructed, not trained, to estimate contact position and motion from angle-of-arrival (AOA) measurements. The major advantages of the NICE system over existing methods are execution speed, an assessment of solution sensitivity, and the potential for sensor fusion. This system offers a number of attractive features. Foremost, a bearing line constrains the locus of points where a contact might be at a given time. Furthermore, different AOA sensors merely produce different loci; all are equivalent and can be fused using the system. Intermittent data can be accommodated by configuring correlation neurons to ignore the missing data, and the geographical grid resolutions can be varied to adjust to the quality of the sensor readings. In addition, the neural network can be executed in a highly parallel manner, taking advantage of the state-of-the-art parallel hardware.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124805866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dispersive networks for nonlinear adaptive filters","authors":"S. P. Day, M. Davenport, D. Camporese","doi":"10.1109/NNSP.1992.253658","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253658","url":null,"abstract":"The authors describe a dispersive network architecture that can be used for nonlinear adaptive channel equalization and signal prediction. Dispersive networks contain internal delay elements that spread out features in the input signal over time and space, so that they influence the output at multiple points in the future. When used for equalization, these networks can compensate for nonlinear channel distortions and achieve a lower error than conventional backpropagation networks of comparable size. In a signal prediction task, dispersive networks can adapt and predict simultaneously in an online environment, while conventional backpropagation networks require additional hardware.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128838428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An electronic parallel neural CAM for decoding","authors":"J. Alspector, A. Jayakumar, B. Ngo","doi":"10.1109/NNSP.1992.253654","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253654","url":null,"abstract":"The authors report measurements taken on an electronic neural system configured for content addressable memory (CAM) using a high-capacity architecture. It is shown that Boltzmann and mean-field learning networks can be implemented in a parallel, analog VLSI system. This system was used to perform experiments with mean-field CAM. The hardware settles on a stored codeword in about 10 μs, roughly independent of code length. The capacity is far higher than that of the standard Hopfield architecture.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125319632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text-independent talker identification system combining connectionist and conventional models","authors":"Younès Bennani","doi":"10.1109/NNSP.1992.253700","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253700","url":null,"abstract":"Several techniques have been used for speaker identification which have different characteristics and capabilities. The respective merits of three different systems respectively employing neural networks, hidden Markov models, and multivariate autoregressive models are compared. A novel text-independent speaker identification system based on the cooperation of these different techniques is presented. This system outperforms previous models and can handle a large number of speakers. It is argued that modular architectures present significant advantages, such as their learning speed, their generalization and representation capabilities, and their ability to satisfy constraints imposed by hardware limitations.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"39 992 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124523370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A system identification perspective on neural nets","authors":"L. Ljung, J. Sjoberg","doi":"10.1109/NNSP.1992.253670","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253670","url":null,"abstract":"The authors review some of the basic system identification machinery to reveal connections with neural networks. In particular, they point to the role of regularization in dealing with model structures with many parameters, and show the links to overtraining in neural nets. Some provisional explanations for the success of neural nets are also offered.","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125516931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}