{"title":"On-line Thai handwritten character recognition using hidden Markov model and fuzzy logic","authors":"R. Budsayaplakorn, W. Asdornwised, S. Jitapunkul","doi":"10.1109/NNSP.2003.1318053","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318053","url":null,"abstract":"This paper presents a new on-line recognition of Thai handwritten characters. Active researches in Thai handwritten character recognition are converged into two distinct methods, HMM and fuzzy logic classifier. The former showed poor recognition rate due to Thai fuzzy characters. The shortcoming of the latter is on difficulties in establishing the set of rules to cover a whole handwriting styles. Our method is proposed to exploit the better of two worlds (HMM and distinctive feature based fuzzy classifier). The experimental result was shown an average recognition rate is improved from 89.1%(using HMM) to 91.2 using our proposed method.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124653424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variational expectation-maximization training for Gaussian networks","authors":"N. Nasios, A. Bors","doi":"10.1109/NNSP.2003.1318033","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318033","url":null,"abstract":"This paper introduces variational expectation-maximization (VEM) algorithm for training Gaussian networks. Hyperparameters model distributions of parameters characterizing Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparameters and the number of Gaussian mixture components. A dual EM algorithm is employed as the initialization stage in the VEM-based learning. In the first stage the EM algorithm is applied on the given data set while the second stage EM is used on distributions of parameters resulted from several runs of the first stage EM. Appropriate maximum log-likelihood estimators are considered for all the parameter distributions involved.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133657732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subspaces of text discrimination with application to biological literature","authors":"Mahesan Suwannaroj, M. Niranjan","doi":"10.1109/NNSP.2003.1317999","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1317999","url":null,"abstract":"This paper is about the application of statistical pattern recognition techniques to the classification of text with the objective of retrieving documents relevant for the construction of gene networks. We start from the usual practice of representing a document, electronically available abstracts of scientific papers in this case, as a high dimensional vector of term of occurrences. We consider the problem of retrieving documents corresponding to the metabolic pathway of the organism yeast, Saccharomyces Cerevisiae, using a trained classifier as filter. We use support vector machines (SVMs) as classifiers and compare techniques for reducing the dimensionality of the problem: latent semantic kernels (LSK) and sequential forward selection (SFS). In order to deal with the issue of having only a small set of accurately labelled documents, we used the approach of transductive inference. In this case, LSK leads to a subspace formed as a linear combination of features (terms in the lexicon) while SFS selects a subset of the dimension. We find, for this problem, that the discriminant information appears to lie in a subspace, which is very small in dimensionality compared to that of the original formulation. By matching against the gene ontology (GO) database, we further find that the selection process (SFS) picks out the discriminant terms that are of biological significance for this problem.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125181206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information cut and information forces for clustering","authors":"R. Jenssen, J. Príncipe, T. Eltoft","doi":"10.1109/NNSP.2003.1318045","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318045","url":null,"abstract":"We define an information-theoretic divergence measure between probability density functions (pdfs) that has a deep connection to the cut in graph-theory. This connection is revealed when the pdfs are estimated by the Parzen method with a Gaussian kernel. We refer to our divergence measure as the information cut. The information cut provides us with a theoretically sound criterion for cluster evaluation. In this paper we show that it can be used to merge clusters. The initial clusters are obtained based on the related concept of information forces. We create directed trees by selecting the predecessor of a node (pattern) according to the direction of the information force acting on the pattern. Each directed tree corresponds to a cluster, hence enabling us to obtain an initial partitioning of the data set. Subsequently, we utilize the information cut as a cluster evaluation function to merge clusters until the predefined number of clusters is reached. We demonstrate the performance of our novel information-theoretic clustering method when applied to both artificially created data and real data, with encouraging results.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"11 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126319684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Loopy belief propagation and probabilistic image processing","authors":"Kazuyuki Tanaka, Jun-ichi Inoue, D. Titterington","doi":"10.1109/NNSP.2003.1318032","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318032","url":null,"abstract":"Estimation of hyperparameters by maximization of the marginal likelihood in probabilistic image processing is investigated by using the cluster variation method. The algorithms are substantially equivalent to generalized loopy belief propagation.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125967696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Separate-variable adaptive combination of LMS adaptive filters for plant identification","authors":"J. Arenas-García, V. Gómez-Verdejo, M. Martínez‐Ramón, A. Figueiras-Vidal","doi":"10.1109/NNSP.2003.1318023","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318023","url":null,"abstract":"The Least Mean Square (LMS) algorithm has become a very popular algorithm for adaptive filtering due to its robustness and simplicity. An adaptive convex combination of one fast a one slow LMS filters has been previously proposed for plant identification, as a way to break the speed vs precision compromise inherent to LMS filters. In this paper, an improved version of this combination method is presented. Instead of using a global mixing parameter, the new algorithm uses a different combination parameter for each weight of the adaptive filter, what gives some advantage when identifying varying plants where some of the coefficients remain unaltered, or when the input process is colored. Some simulation examples show the validity of this approach when compared with the one-parameter combination scheme and with a different multi-step approach.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"28 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132118552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An effective reject rule for reliability improvement in bank note neuro-classifiers","authors":"A. Ahmadi, S. Omatu, T. Kosaka","doi":"10.1109/NNSP.2003.1318050","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318050","url":null,"abstract":"In this paper the reliability of bank note neuro-classifiers is investigated and a reject rule is proposed on the basis of probability density function of the input data. The reliability of classification is evaluated through two parameters, which are associated with the winning class probability and the second maximal probability. Then a threshold value is considered to reject the unreliable classifications. As for modeling the non-linear correlation among the data variables and extracting the features, a local principal components analysis (PCA) is applied. The method is tested with a learning vector quantization (LVQ) classifier using 3,600 data samples of various bills of US dollar. The results show that by taking a suitable reject threshold value and also a proper number of regions for the local PCA, the reliability of the system can be improved significantly.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115815581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A neural network method to improve prediction of protein-protein interaction sites in heterocomplexes","authors":"P. Fariselli, A. Zauli, I. Rossi, M. Finelli, P. Martelli, R. Casadio","doi":"10.1109/NNSP.2003.1318002","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318002","url":null,"abstract":"In this paper we describe an algorithm, based on neural networks that adds to the previously published results (ISPRED, www.biocomp.unibo.it) and increases the predictive performance of protein-protein interaction sites in protein structures. The goal is to reduce the number of spurious assignment and developing knowledge based computational approach to focus on clusters of predicted residues on the protein surface. The algorithm is based on neural networks and can be used to highlight putative interacting patches with high reliability, as indicated when tested on known complexes in the PDB. When a smoothing algorithm correlates the network outputs, the accuracy in identifying the interaction patches increases from 73% up 76%. The reliability of the prediction is also increased by the application the smoothing procedure.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114237138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A variational approach to robust Bayesian interpolation","authors":"Michael E. Tipping, Neil D. Lawrence","doi":"10.1109/NNSP.2003.1318022","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318022","url":null,"abstract":"We detail a Bayesian interpolation procedure for linear-in-the-parameter models, which combines both effective complexity control and robustness to outliers. Robustness is obtained by adopting a student-t noise distribution, defined hierarchically in terms of an inverse-gamma prior distribution over individual Gaussian observation variances. Importantly, this hierarchical definition enables practical Bayesian variational techniques to concurrently determine both the primary model parameters and the form of the noise process. We show that the model is capable of flexibly inferring, from limited data, both Gaussian and more heavily-tailed student-t noise processes as appropriate.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116230546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An on-line algorithm for blind source extraction based on nonlinear prediction approach","authors":"D. Mandic, A. Cichocki, U. Manmontri","doi":"10.1109/NNSP.2003.1318042","DOIUrl":"https://doi.org/10.1109/NNSP.2003.1318042","url":null,"abstract":"A gradient descent based on-line algorithm for blind source extraction (BSE) of instantaneous signal mixtures is proposed. This algorithm is derived by utilising a nonlinear adaptive filter in a structure that consists of an extraction and prediction module. By exploiting the predictability property of a signal from the mixture, source signals are extracted based on the order of the nonlinear adaptive predictor. To improve the convergence of the basic algorithm, it is further globally normalised based on the minimisation of the a posteriori prediction error. Next, the algorithm is made fully adaptive to compensate for the independence and other assumptions in its derivation. Two examples are presented to illustrate the performance of the algorithms.","PeriodicalId":315958,"journal":{"name":"2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116579181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}