{"title":"Single chip VLSI realization of a neural net for fast decision making functions","authors":"F. Stupmann, S. Rode, G. Geske","doi":"10.1109/ICONIP.2002.1198204","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198204","url":null,"abstract":"The latest results of a hardware realization of a neural net for fast real-time decision-making functions are presented here. A digital micro core performs several functions: it conducts the learning and testing of the net, supervises the training process, and carries out some calculations in pre- and post-processing. The decision-making function is a trainable integrated analog neural network structure. The circuit contains not only the reproduction path but also on-chip learning. Learning patterns for the neural chip are provided in a memory unit and are automatically presented to the network. The process of weight change (i.e. learning) is fully integrated. The information processing time from the input to the output of the chip is 2 μs in the reproduction process. The chip integrates 100 neurons in the input layer, 60 in the hidden layer and 10 in the output layer. The backpropagation algorithm is implemented in an analog circuit.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131989991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Taguchi methods to train artificial neural networks in pattern recognition, control and evolutionary applications","authors":"G. Maxwell, C. MacLeod","doi":"10.1109/ICONIP.2002.1202182","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202182","url":null,"abstract":"Taguchi methods are commonly used to optimise industrial systems, particularly in manufacturing. We have shown that they may also be used to optimise neural network weights and therefore train the network. This paper builds on previous work and explains the application of the method to network training in several important areas, including pattern recognition, neurocontrol, evolutionary or genetic networks and nonlinear neurons. Consideration is also given to the training of networks for failure and fault control systems.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132059274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Higher order multidirectional associative memory with decreasing energy function","authors":"H. Miyajima, N. Kiriki, Noritaka Shigei, S. Yatsuki","doi":"10.1109/ICONIP.2002.1198962","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198962","url":null,"abstract":"This paper describes higher order multidirectional associative memory (HOMAM) with decreasing energy function. From numerical simulation and static analysis, HOMAM is superior to other models, such as multidirectional associative memory (MAM) and second order multidirectional associative memory (SOMAM).","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132224991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Base optoelectronic three-dimensional technique in aspect of neural computers build-up","authors":"A. Galushkin, A. I. Sukhoparov, V.N. Svede-Chvets, V.V. Svede-Chvets","doi":"10.1109/ICONIP.2002.1198183","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198183","url":null,"abstract":"The paper presents an optoelectronic three-dimensional technique with multi-channel optical interconnects based on standard electronic technologies. This technique makes it possible to create a functional set of optoelectronic (OE) VLSI circuits. This set can be used for making different kinds of processing and communication devices with massive parallelism and multichannel fiber-optic links. The main parameters of the OE VLSI circuits are presented, and methods for implementing neural accelerators and networks on the basis of OE VLSI circuits are proposed.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130151704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural networks in FPGAs","authors":"A. Omondi, J. Rajapakse","doi":"10.1109/ICONIP.2002.1198202","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198202","url":null,"abstract":"As FPGAs have increasingly become denser and faster, they are being utilized for many applications, including the implementation of neural networks. Ideally, FPGA implementations, being directly in hardware and having parallelism, will have performance advantages over software on conventional machines. But there is a great deal to be done to make the most of FPGAs and to prove their worth in implementing neural networks, especially in view of past failures in the implementation of neurocomputers. This paper looks at some of the relevant issues.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133923829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning for Order Acceptance on a shared resource","authors":"M. M. Hing, A. van Harten, P. Schuur","doi":"10.1109/ICONIP.2002.1202861","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202861","url":null,"abstract":"Order acceptance (OA) is one of the main functions in business control. Basically, OA involves a reject/accept decision for each order. Always accepting an order when capacity is available could prevent the system from accepting more attractive orders in the future, with opportunity losses as a consequence. Another important aspect is the availability of information to the decision-maker. We use a stochastic modeling approach, Markov decision theory and learning methods from artificial intelligence to find decision policies, even under uncertain information. Reinforcement learning (RL) is a fairly new approach in OA; it is capable of simultaneously learning both the decision policy and the incomplete information. It is shown here that RL works well compared with heuristics. Finding good heuristics in a complex situation is a delicate art, and it is demonstrated that an RL-trained agent can be used to support the detection of good heuristics.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133924632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model of word meaning inference development in child","authors":"T. Shimotomai, T. Omori","doi":"10.1109/ICONIP.2002.1202818","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202818","url":null,"abstract":"Many researchers of infant language acquisition have reported that three-year-old infants can properly expand the meaning of a noun to a new object. However, for the verb, Imai et al. reported that infants gradually acquire a proper expansion of verb meaning from 3 years to 5 years old. In this study, we analyze the experimental data with a statistical model in order to elucidate the structure of information processing in a child's meaning inference of nouns and verbs.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134163011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A SOM-based method for feature selection","authors":"H. Ye, Hanchang Liu","doi":"10.1109/ICONIP.2002.1202830","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202830","url":null,"abstract":"This paper presents a method for feature selection, called the feature competitive algorithm (FCA), which is based on an unsupervised neural network, the self-organising map (SOM). The FCA is capable of selecting, via unsupervised learning, the most important features describing target concepts from a given set of features. The FCA is simple to implement and fast in feature selection, as the learning is done automatically and no training data are needed. A quantitative measure, called the average distance distortion ratio, is derived to assess the quality of the selected feature set, and an asymptotically optimal feature set can then be determined on the basis of this assessment. This addresses an open research issue in feature selection. The method has been applied to a real case: a software document collection consisting of a set of UNIX command manual pages. The results obtained from a retrieval experiment based on this collection demonstrate very promising potential.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"4671 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127578085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolutionary fuzzy system design and implementation","authors":"Jose F. M. Amaral, R. Tanscheit, M. Pacheco, M. Vellasco","doi":"10.1109/ICONIP.2002.1198998","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198998","url":null,"abstract":"This work proposes a methodology for the design of fuzzy systems based on evolutionary computation techniques. A three-stage evolutionary algorithm that uses genetic algorithms evolves the knowledge base of a fuzzy system - rule base and parameters. The evolutionary aspect makes the design more simple and efficient, especially when compared with traditional trial and error methods. The method emphasizes interpretability so that the resulting strategy is clearly stated. An evolvable hardware platform for the synthesis of analog electronic circuits is proposed. This platform, which can be used for the implementation of the designed fuzzy system, is based on a field programmable analog array. The performance of a fuzzy system in the control of both a linear and nonlinear plant is evaluated. The results obtained with these two plants show the applicability of this hybrid model in the design of fuzzy control systems.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The neural basis of stereoscopic vision","authors":"R. Freeman","doi":"10.1109/ICONIP.2002.1199028","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1199028","url":null,"abstract":"Stereoscopic vision allows animals with frontally placed eyes to perceive very small differences in relative depth. A great deal of theoretical and behavioral work has been undertaken to try to understand the parameters of this process. Physiological investigations show that neurons in the visual cortex are able to encode and process stereoscopic information. We have shown that this encoding may occur by a system that assesses differences in internal structure of receptive fields of left and right eyes. We have also developed a biologically plausible model under the assumption of serial processing that accounts for most of the experimental findings. Our results demonstrate that a major mechanism for stereoscopic encoding is likely to occur via phase differences of left and right eye receptive fields.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130803308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}