{"title":"Third-order generalization and a new approach to systematically categorizing higher-order generalization","authors":"R. Neville","doi":"10.1109/IJCNN.2005.1556174","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556174","url":null,"abstract":"Higher-order generalization is a means of categorizing different types of generalization. The paper presents a framework within which higher-order generalization can be evaluated in a detailed and systematic way. Previous research divided generalization into three categories. However, these categories were fuzzy and imprecise. This paper further refines existing definitions by first assigning each category a logical predicate that it must fulfil in order to achieve a specific order (type) of generalization. Then, it breaks the orders down into four different categories in a detailed and systematic way. The paper focuses on early (initial) results; some of the aims have been demonstrated and amplified through the experimental work.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132807640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of contact map integrated PNN with conformational energy","authors":"Peng Chen, De-shuang Huang, B. Wang, Yun-ping Zhu, Yixue Li","doi":"10.1109/IJCNN.2005.1555881","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555881","url":null,"abstract":"This paper presents a novel method to solve the protein's three-dimensional structure prediction problem. It is a machine learning approach by integrating probabilistic neural network (PNN) with conformational energy function (CEF) based on chemico-physical knowledge of amino acids. In this method, firstly, the principal components are extracted from selected protein structures with lower sequence identity, and an initial matrix of contact map is constructed by K-L expansion. Secondly, PNN is used for predicting the long-range interaction of amino acids in protein. In particular, this method uses the CEF and chemico-physical characteristics of amino acids to run the PNN predictor. Consequently, it was found that our proposed method is better than existing methods, such as the hybrid method of HMMSTR and the correlated mutation analysis method. As a result, this method can accurately predict 31% of contacts at a distance cutoff of 8/spl Aring/ for proteins whose sequence length is up to 200.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133487270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning and the frame problem","authors":"R. Santiago, G. Lendaris","doi":"10.1109/IJCNN.2005.1556398","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556398","url":null,"abstract":"The frame problem, originally proposed within AI, has grown to be a fundamental stumbling block for building intelligent agents and modeling the mind. The source of the frame problem stems from the nature of symbolic processing. Unfortunately, connectionist approaches have long been criticized as having weaker representational capabilities than symbolic systems so have not been considered by many. The equivalence between the representational power of symbolic systems and connectionist architectures is redressed through neural manifolds, and reveals an associated frame problem. Working within the construct of neural manifolds, the frame problem is solved through the use of contextual reinforcement learning, a new paradigm recently proposed.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133990602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolutionary neural classification for evaluation of retail stores and decision support","authors":"R. Stahlbock, S. Crone","doi":"10.1109/IJCNN.2005.1556098","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556098","url":null,"abstract":"The neural network paradigm of learning vector quantization (LVQ) and several enhancements of the standard algorithms have demonstrated improved predictive accuracy when applied to simple 'toy' problems. In this paper, we propose a novel approach of evolutionary optimized LVQ classification applied in real world business decision support. We predict the success of retail outlets of a multinational German company in terms of revenue and profit. The predictions are used to support investment decisions, establishing new stores or closing down existing ones with limited prospective profits. In addition, the predictions provide information to change in-store design or product lines of existing stores. The LVQ networks are trained on data reflecting the macroscopic socio-demographic infrastructure and microscopic in-store aspects of existing outlets. Results of numerous computational experiments in a parallelized PC network are compared with standard neural networks, demonstrating pre-eminent results of the novel method.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134038376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse Bayesian learning and the relevance multi-layer perceptron network","authors":"G. Cawley, N. L. C. Talbot","doi":"10.1109/IJCNN.2005.1556045","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556045","url":null,"abstract":"We introduce a simple framework for sparse Bayesian learning with multi-layer perceptron (IMLP) networks, inspired by Tipping's relevance vector machine (RVM). Like the RVM, a Bayesian prior is adopted that includes separate hyperparameters for each weight, allowing redundant weights and hidden layer units to be identified and subsequently pruned from the network, whilst also providing a means to avoid over-fitting the training data. This approach is also more easily implemented, as only the diagonal elements of the Hessian matrix are used in the update formula for the regularisation parameters, rather than the traces of square sub-matrices of the Hessian corresponding to the weights associated with each regularisation parameter. The proposed relevance multi-layer perceptron (RMLP) is evaluated over several publicly available benchmark datasets, demonstrating the viability of the approach, giving rise to similar generalisation performance, but with far fewer weights.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131861143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faithful retinotopic maps with local optimum rules, axonal competition, and Hebbian learning","authors":"J.-P. Thivierget, E. Balaban","doi":"10.1109/IJCNN.2005.1556362","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556362","url":null,"abstract":"Innervation of the visual midbrain by axons from the retina can be described as a stochastic mapping process that maintains topography and polarity between the two regions. Previous work has identified a number of mechanisms that insure proper guidance of the axons. In the current report, we combine three of these mechanisms, servomechanical guidance with local optimum rules, axonal competition, and Hebbian plasticity. Although each of these separate processes are stochastic and therefore subject to imprecision, their combination guides growth cones to precise termination points.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"28 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134197170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic feature fusion in the self organising tree map-applied to the segmentation of biofilm images","authors":"M. Kyan, L. Guan, S. Liss","doi":"10.1109/IJCNN.2005.1556285","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556285","url":null,"abstract":"The self organising tree map (SOTM) neural network is investigated as a means of segmenting microorganisms from confocal microscope image data. Features describing pixel & regional intensities, phase congruency and spatial proximity are explored in terms of their impact on the segmentation of bacteria and other micro-organisms. The significance of individual features is investigated, and it is proposed that, within the context of micro-biological image segmentation, better object delineation can be achieved if certain features dominate the initial stages of learning. In this way, other features are allowed to become more/less significant as learning progresses: as the network gains more knowledge about the data being segmented. The efficiency and flexibility of the SOTM in adapting to, and preserving the topology of input space, makes it an appropriate candidate for implementing this idea. Preliminary experiments are presented and it is found that favouring intensity characteristics in the early phases of learning, whilst relaxing proximity constraints in later phases of learning, offers a general mechanism through which we can improve the segmentation of microbial constituents","PeriodicalId":365690,"journal":{"name":"Proceedings. 
2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"717 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134557925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discriminative training of hidden Markov models by multiobjective optimization for visual speech recognition","authors":"Jong-Seok Lee, C. Park","doi":"10.1109/IJCNN.2005.1556216","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1556216","url":null,"abstract":"This paper proposes a novel discriminative training algorithm of hidden Markov models (HMMs) based on the multiobjective optimization for visual speech recognition. We develop a new criterion composed of two minimization objectives for training HMMs discriminatively and a global multiobjective optimization algorithm based on the simulated annealing algorithm to find the Pareto solutions of the optimization problem. We demonstrate the effectiveness of the proposed method via an isolated digit recognition experiment. The results show that the proposed method is superior to the conventional maximum likelihood estimation and the popular discriminative training algorithms.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133135917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A recurrent RBF network model for nearest neighbor classification","authors":"M. K. Muezzinoglu, J.M. Zuracla","doi":"10.1109/IJCNN.2005.1555854","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555854","url":null,"abstract":"Superposition of radial basis functions centered at given prototype patterns constitutes one of the most suitable energy forms for gradient systems that perform nearest neighbor classification with real-valued static prototypes. It is shown in this paper that a continuous-time dynamical neural network model, employing a radial basis function and a sigmoid multilayer perceptron sub-networks, is capable of maximizing such an energy form locally, thus performing almost perfectly nearest neighbor classification, when initiated by a distorted pattern. The dynamical classification scheme implemented by the network eliminates all comparisons, which are the vital steps of the conventional nearest neighbor classification process. The performance of the proposed network model is demonstrated on image reconstruction applications.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133793367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed computation for neural-based abductive reasoning","authors":"L. Romdhane, M. Elhadef","doi":"10.1109/IJCNN.2005.1555960","DOIUrl":"https://doi.org/10.1109/IJCNN.2005.1555960","url":null,"abstract":"This work extends a recent model for neural-based abductive reasoning to account for the monotonic class. A problem is said to be monotonic some causes, together, explain the same effect. For this, we developed a new computational principle, called the softmin, and implemented it within a neural architecture. Simulation results are very satisfactory and should stimulate future research.","PeriodicalId":365690,"journal":{"name":"Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005.","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115520467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}