Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002: latest articles

Extensions of Lagrange programming neural network for satisfiability problem and its several variations
M. Nagamatu, T. Nakano, N. Hamada, T. Kido, T. Akahoshi
DOI: 10.1109/ICONIP.2002.1198980
Abstract: The satisfiability problem (SAT) of the propositional calculus is a well-known NP-complete problem, requiring computation time that grows exponentially with problem size. We previously proposed a neural network, called LPPH, for the SAT. The equilibrium points of the LPPH dynamics correspond exactly to solutions of the SAT, and the dynamics does not stop at any point that is not a solution. Experimental results show the effectiveness of the LPPH for solving the SAT. In this paper we extend the LPPH dynamics to several variations of the SAT: the SAT with an objective function, the SAT with a preliminary solution, and the MAX-SAT. The effectiveness of the extensions is shown experimentally.
Citations: 4
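The coupling of variable dynamics with per-clause Lagrange multipliers can be illustrated with a generic sketch (not the paper's exact LPPH equations): relaxed truth values descend the gradient of a Lagrangian whose multipliers keep growing while a clause remains unsatisfied, so no non-solution point is stable. The formula and all parameters below are illustrative.

```python
import numpy as np

# Hedged sketch of Lagrangian dynamics for SAT, in the spirit of LPPH but
# NOT the paper's exact equations. Clauses are lists of signed ints:
# +i means variable i, -i means its negation.
clauses = [[1, 2], [-1, 2], [-2, 3]]
n = 3

def value(lit, x):
    # Fractional truth value of a literal under the relaxed assignment x
    return x[abs(lit) - 1] if lit > 0 else 1.0 - x[abs(lit) - 1]

def dissatisfaction(x, clause):
    # Product of (1 - literal value): zero iff the clause is satisfied
    p = 1.0
    for lit in clause:
        p *= 1.0 - value(lit, x)
    return p

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, n)     # relaxed truth values in [0, 1]
lam = np.ones(len(clauses))      # one Lagrange multiplier per clause
dt = 0.05
for _ in range(2000):
    grad = np.zeros(n)
    for lam_c, clause in zip(lam, clauses):
        for idx, lit in enumerate(clause):
            i = abs(lit) - 1
            rest = 1.0           # product over the clause's other literals
            for k, other in enumerate(clause):
                if k != idx:
                    rest *= 1.0 - value(other, x)
            grad[i] += lam_c * (-rest if lit > 0 else rest)
    x = np.clip(x - dt * grad, 0.0, 1.0)    # gradient descent on variables
    lam += dt * np.array([dissatisfaction(x, c) for c in clauses])  # ascent

assignment = x > 0.5             # round the relaxed values to booleans
print(assignment)
```

Rounding the converged state gives a satisfying assignment for this small formula; the multiplier ascent is what prevents the dynamics from settling on an unsatisfying fixed point.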
Increasing the topological quality of Kohonen's self organising map by using a hit term
E. Germen
DOI: 10.1109/ICONIP.2002.1198197
Abstract: The quality of the topology obtained at the end of the training of Kohonen's self-organizing map (SOM) depends strongly on the learning rate and neighborhood function chosen at the outset. Conventional approaches to determining these parameters do not account for the data statistics or the topological characterization of the neurons. The paper proposes a new parameter that depends on the hit ratio between the updated neuron and the best matching neuron. It is shown that using this parameter together with the conventional learning rate and neighborhood functions yields a more adequate solution, since it incorporates information about the data statistics during the adaptation process.
Citations: 7
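The abstract does not give the hit term's exact form, so the sketch below modulates a conventional 1-D SOM update with a hypothetical hit-frequency factor: neurons that win disproportionately often are moved less, which spreads the map according to the data statistics. The modulation formula is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, 500)     # 1-D toy data
n_neurons = 10
w = rng.uniform(0.0, 1.0, n_neurons)  # codebook vectors
hits = np.zeros(n_neurons)            # how often each neuron has won

for t, x in enumerate(data):
    bmu = int(np.argmin(np.abs(w - x)))    # best matching unit
    hits[bmu] += 1
    lr = 0.5 * (1.0 - t / len(data))       # decaying base learning rate
    for j in range(n_neurons):
        # Gaussian neighborhood on the 1-D neuron lattice
        neigh = np.exp(-((j - bmu) ** 2) / (2.0 * 2.0 ** 2))
        # Hypothetical hit modulation (NOT the paper's exact term):
        # frequently-winning neurons receive smaller updates.
        hit_mod = 1.0 / (1.0 + hits[j] / (t + 1))
        w[j] += lr * neigh * hit_mod * (x - w[j])

print(np.round(np.sort(w), 2))
```

Every update is a convex step toward a sample in [0, 1], so the codebook stays inside the data range while the hit counts steer how evenly it spreads.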
K-Means Fast Learning Artificial Neural Network, an alternative network for classification
A. Phuan, S. Prakash
DOI: 10.1109/ICONIP.2002.1198196
Abstract: The K-Means Fast Learning Artificial Neural Network (K-FLANN) is an improvement on the original FLANN II (Tay and Evans, 1994). Whereas FLANN II develops inconsistencies in clustering, influenced by the arrangement of the data, K-FLANN addresses this issue by relocating the clustered centroids. Results of the investigation are presented along with a discussion of the fundamental behavior of K-FLANN. Comparisons are made with the K-Means clustering algorithm and the Kohonen SOM. A further discussion is provided on how K-FLANN can qualify as an alternative method for fast classification.
Citations: 14
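For reference, the K-Means baseline that the paper compares against, with the characteristic centroid-relocation step, can be sketched in a few lines (toy 2-D data and a deterministic initialisation, chosen here for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated 2-D blobs
X = np.vstack([rng.normal([0.0, 0.0], 0.1, (50, 2)),
               rng.normal([5.0, 5.0], 0.1, (50, 2))])

def kmeans(X, k, init_idx, iters=20):
    centroids = X[init_idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # assign each point to its nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)  # relocate the centroid
    return centroids, labels

# Deterministic init: one seed point taken from each blob
centroids, labels = kmeans(X, 2, init_idx=[0, 50])
print(np.round(centroids, 1))
```

The order-dependence K-FLANN targets shows up in this baseline when the initial seeds are poorly placed; repeated relocation of the centroids is what pulls the partition back toward the data's true cluster structure.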
MR brain image segmentation by adaptive mixture distribution
Juin-Der Lee, P. Cheng, M. Liou
DOI: 10.1109/ICONIP.2002.1202163
Abstract: The Box-Cox transformation is applied to fit a Gaussian mixture distribution to brain image intensity data. The advantage of this data-adaptive mixture model is evidenced by better segmentation results compared with existing EM procedures using a standard Gaussian mixture distribution.
Citations: 1
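The general recipe, transforming skewed intensities toward normality and then fitting a Gaussian mixture with EM, can be sketched on synthetic data. This is a toy stand-in: the paper adaptively estimates the Box-Cox parameter, whereas here λ = 0 (the log transform) is fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(3)
# Skewed "intensity" data: two log-normal modes
x = np.concatenate([np.exp(rng.normal(0.0, 0.2, 400)),
                    np.exp(rng.normal(2.0, 0.2, 600))])

def boxcox(x, lam):
    # Box-Cox power transform; lam = 0 is the log limit
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

y = boxcox(x, 0)   # after the transform, both modes are (near-)Gaussian

# Minimal EM for a two-component 1-D Gaussian mixture
mu = np.percentile(y, [25.0, 75.0])     # init means at data quartiles
var = np.array([y.var(), y.var()])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(y[:, None] - mu) ** 2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = r.sum(axis=0)
    pi = nk / len(y)
    mu = (r * y[:, None]).sum(axis=0) / nk
    var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.round(np.sort(mu), 2))
```

On raw (untransformed) intensities the Gaussian components fit the skewed modes poorly; running EM after the transform recovers the two underlying means, which is the advantage the abstract claims for the data-adaptive model.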
A dynamic neural network model on global-to-local interaction over time course
Kangwoo Lee, Jianfeng Feng, H. Buxton
DOI: 10.1109/ICONIP.2002.1202819
Abstract: We propose a neural network model based on contextual learning and a non-leaky integrate-and-fire (IF) model. The model integrates inputs from its own module as well as from the other module over time. Moreover, the integration of inputs from different modules is not a simple accumulation of activation over the time course, but depends on the interaction between the primary input, on which the behaviour of a modular network should be based, and the contextual input, which facilitates or interferes with the performance of the modular network. The learning rule is derived under the assumption that the time scale of the interval to the first spike can be adjusted during learning. The model is applied to explain global-to-local processing of Navon-type stimuli, in which a global letter is hierarchically composed of local letters. The model provides insights that may underlie the asymmetric global-local interaction found in many psychophysical and neuropsychological studies.
Citations: 1
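The time-to-first-spike quantity that the learning rule manipulates is easy to see in a minimal non-leaky IF unit (a generic toy, not the paper's modular model; threshold and step size are arbitrary choices):

```python
def first_spike_time(current, threshold=1.0, dt=0.001, t_max=5.0):
    """Non-leaky integrate-and-fire: the membrane integrates the input
    with no decay and fires when it crosses the threshold."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += current * dt      # no leak term: pure integration
        t += dt
        if v >= threshold:
            return t           # time to first spike ~ threshold / current
    return None                # input too weak to reach threshold in time

print(first_spike_time(2.0))   # roughly threshold / current = 0.5
```

A facilitating contextual input adds to the primary current and shortens the interval to the first spike, while an interfering one lengthens it, which is the mechanism behind the facilitation/interference described in the abstract.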
Neural network methods for radar processing
A. L. Tatuzov
DOI: 10.1109/ICONIP.2002.1198969
Abstract: Radar automatic data processing faces significant difficulties arising from the poor flexibility of known algorithms and the low computational capacity of traditional computing devices. Neural networks can help the radar designer overcome these difficulties through the computational power of parallel neural hardware and the adaptive capabilities of neural algorithms. The application of neural networks to the most difficult radar problems is proposed and analyzed, with examples including phased-array antenna weight adaptation, genetic algorithms for optimizing multibased coded signals, data association in a multitarget environment, and neural training for decision-making systems. Analysis of the proposed methods shows that a considerable increase in efficiency can be achieved when neural networks are applied to radar information processing.
Citations: 6
Low power design using architecture and circuit level approaches
Dong-Sun Kim, Jin-Tea Kim, Ki-Won Kwon, Duck-Jin Chung
DOI: 10.1109/ICONIP.2002.1198150
Abstract: This paper proposes a methodology for low-power circuit design at the architecture and circuit levels. Fast computation is increasingly important in DSP, image processing, and general-purpose processors, so it is essential to reduce power consumption in digital circuits while maintaining computational throughput. Design experience and research since the early 1990s have demonstrated that doing so requires a "power conscious" design methodology that addresses dissipation at every level of the design hierarchy, and many pass-transistor logic families have been proposed to reduce power consumption and circuit size. This paper introduces low-power methodologies using pass transistors and the SDD (Signal Dependency Diagram) technique for parallel and pipelined architectures.
Citations: 2
Time constrain optimal method to find the minimum architectures for feedforward neural networks
Teck-Sun Tan, G. Huang
DOI: 10.1109/ICONIP.2002.1202189
Abstract: Huang et al. (1996, 2002) proposed an architecture selection algorithm called SEDNN to find minimum architectures for feedforward neural networks, based on the golden section search method and the upper bounds on the number of hidden neurons stated in Huang (2002) and Huang et al. (1998): 2√((m + 2)N) for a two-layered feedforward network (TLFN) and N for a single-layer feedforward network (SLFN), where N is the number of training samples and m is the number of output neurons. The SEDNN algorithm works well under the assumption that unlimited time is available for its execution. This paper proposes an algorithm similar to SEDNN, but with an added time factor to cater for applications that require results within a specified period of time.
Citations: 3
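The core search, a golden section search over the hidden-neuron count up to the 2√((m + 2)N) bound, can be sketched as follows. The validation-error function here is a hypothetical unimodal stand-in; in SEDNN it would be the error of an actually trained network.

```python
import math

def golden_section_min(f, lo, hi, tol=1.0):
    """Golden-section search for the minimiser of a unimodal f on [lo, hi]."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Hypothetical stand-in for "validation error vs. hidden units",
# assumed unimodal with its minimum at 12 hidden neurons.
err = lambda n: (n - 12.0) ** 2
m, N = 1, 200                              # output neurons, training samples
upper = 2 * math.sqrt((m + 2) * N)         # Huang's upper bound for a TLFN
best = round(golden_section_min(err, 1, upper))
print(best)  # → 12
```

Each iteration shrinks the bracket by the golden ratio, so the number of networks that must be trained grows only logarithmically with the upper bound; the paper's contribution is cutting this search off once a time budget is exhausted.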
A quantized chaotic spiking neuron and CDMA coding
R. Furumachi, H. Torikai, T. Saito
DOI: 10.1109/ICONIP.2002.1198119
Abstract: When a higher-frequency input is applied to a chaotic spiking neuron, the state is quantized and the chaotic pulse-train changes into various co-existing super-stable periodic pulse-trains (SSPTs). Using a quantized pulse-position map, the number of SSPTs and their periods are clarified theoretically. The multiplex correlation characteristics of a set of SSPTs are also clarified, with a view to application in CDMA communication systems.
Citations: 0
Focusing on soft-computing techniques to model the role of context in determining colours
E.R. Denby
DOI: 10.1109/ICONIP.2002.1198144
Abstract: This paper describes an initial study investigating the role of context in determining colours from a machine-learning perspective. A soft-computing technique, in the form of fuzzy neural networks, is used to categorise colours after training. The main hypothesis is that the neural network will not perform as well as a human familiar with the NCS colour space, because humans possess the context knowledge needed to correctly classify any colour variety into eleven groupings. The paper describes the process of creating a dataset suitable for the network, and reports on the use of the FuzzyCOPE 3© software to investigate this hypothesis. It also raises questions such as: what is context knowledge, and can the network's learning be said to possess contextual knowledge of the colour space?
Citations: 0