Adv. Artif. Neural Syst.: Latest Publications

Dynamical Behavior in a Four-Dimensional Neural Network Model with Delay
Adv. Artif. Neural Syst. Pub Date: 2012-01-01 DOI: 10.1155/2012/397146
Changjin Xu, Peiluan Li
Abstract: A four-dimensional neural network model with delay is investigated. Using the theory of delay differential equations and Hopf bifurcation, conditions under which the equilibrium undergoes a Hopf bifurcation are worked out by choosing the delay as the bifurcation parameter. Applying normal form theory and the center manifold argument, we derive explicit formulae for determining the properties of the bifurcating periodic solutions. Numerical simulations are performed to illustrate the analytical results.
Pages: 397146:1-397146:11
Citations: 0
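
The exact four-dimensional system studied in this paper is not reproduced in the listing; as a point of reference only, delayed Hopfield-type networks analysed in this line of work are commonly written in the form below (the coefficients a_ij, activations f_j, and single delay tau are illustrative assumptions, not the paper's model). A Hopf bifurcation occurs when, as tau crosses a critical value tau_0, a pair of roots of the characteristic equation crosses the imaginary axis.

    \dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{4} a_{ij}\, f_j\bigl(x_j(t-\tau)\bigr),
    \qquad i = 1,\dots,4.
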
Cross-Validation, Bootstrap, and Support Vector Machines
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/302572
M. Tsujitani, Yusuke Tanaka
Abstract: This paper considers applications of resampling methods to support vector machines (SVMs). We use leave-one-out cross-validation (CV) to determine the optimum tuning parameters and bootstrap the deviance to summarize the goodness of fit of SVMs. Leave-one-out CV is also adapted to estimate the bias of the excess error in a prediction rule constructed from training samples. We analyze data from a mackerel-egg survey and a liver-disease study.
Pages: 302572:1-302572:6
Citations: 16
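
A minimal sketch of the leave-one-out tuning step described above, using scikit-learn. The mackerel-egg and liver-disease data sets are not available here, so a synthetic data set and an illustrative parameter grid stand in; the bootstrap-deviance part of the paper is omitted.

    # Leave-one-out cross-validation for choosing SVM tuning parameters.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=60, n_features=8, random_state=0)

    best = None
    for C in (0.1, 1.0, 10.0):
        for gamma in (0.01, 0.1, 1.0):
            # Each sample is held out once as the single test point.
            acc = cross_val_score(SVC(C=C, gamma=gamma), X, y,
                                  cv=LeaveOneOut()).mean()
            if best is None or acc > best[0]:
                best = (acc, C, gamma)

    print("best LOO accuracy %.3f with C=%s, gamma=%s" % best)
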
On the Global Dissipativity of a Class of Cellular Neural Networks with Multipantograph Delays
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/941426
Liqun Zhou
Abstract: The global dissipativity of a class of cellular neural networks with multipantograph delays is studied for the first time. On the one hand, delay-dependent sufficient conditions are obtained by directly constructing suitable Lyapunov functionals; on the other hand, a change of variables first transforms the networks with multipantograph delays into networks with constant delays and variable coefficients, after which Lyapunov functionals yield delay-independent sufficient conditions. These sufficient conditions ensure global dissipativity together with the corresponding sets of attraction, can be used to design globally dissipative cellular neural networks with multipantograph delays, and are easily checked in practice by simple algebraic methods. An example illustrates the correctness of the results.
Pages: 941426:1-941426:7
Citations: 13
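
For readers unfamiliar with pantograph delays: they are proportional delays of the form q_k t with 0 < q_k < 1. A cellular neural network with multipantograph delays is commonly written as below (an illustrative form, not necessarily the paper's exact model). The change of variables usually employed, y_i(t) = x_i(e^t), turns each proportional delay q_k t into a constant delay of length -ln q_k at the cost of time-varying coefficients, which matches the transformation mentioned in the abstract.

    \dot{x}_i(t) = -c_i x_i(t) + \sum_j a_{ij}\, f_j\bigl(x_j(t)\bigr)
                   + \sum_k \sum_j b_{ij}^{(k)} g_j\bigl(x_j(q_k t)\bigr) + I_i,
    \qquad 0 < q_k < 1.
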
A Simplified Natural Gradient Learning Algorithm
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/407497
Michael R. Bastian, J. Gunther, T. Moon
Abstract: Adaptive natural gradient learning avoids singularities in the parameter space of multilayer perceptrons. However, it requires a larger number of additional parameters than ordinary backpropagation, in the form of the Fisher information matrix. This article describes a new approach to natural gradient learning that uses a smaller Fisher information matrix, together with a prior distribution on the neural network parameters and an annealed learning rate. While the new approach is computationally simpler, its performance is comparable to that of adaptive natural gradient learning.
Pages: 407497:1-407497:9
Citations: 13
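
A generic natural-gradient update, shown only to make the abstract concrete: theta <- theta - lr * F^{-1} * grad, which reduces to ordinary gradient descent when F is the identity. The sketch uses the full empirical Fisher matrix, so it does not reproduce the paper's smaller Fisher matrix, prior distribution, or annealed learning rate.

    # Generic natural-gradient step (illustrative only).
    import numpy as np

    def natural_gradient_step(theta, per_sample_grads, lr=0.1, damping=1e-3):
        """theta: (d,) parameters; per_sample_grads: (n, d) per-sample
        gradients of the log-likelihood, one row per training sample."""
        g = per_sample_grads.mean(axis=0)                 # ordinary gradient
        F = per_sample_grads.T @ per_sample_grads / len(per_sample_grads)
        F += damping * np.eye(len(theta))                 # keep F invertible
        return theta - lr * np.linalg.solve(F, g)         # theta - lr * F^-1 g
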
The Generalized Dahlquist Constant with Applications in Synchronization Analysis of Typical Neural Networks via General Intermittent Control
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/249136
Zhang Qunli
Abstract: A novel and effective approach to the synchronization analysis of neural networks is investigated using a nonlinear operator, the generalized Dahlquist constant, and general intermittent control. The proposed approach offers a design procedure for the synchronization of a large class of neural networks. Numerical simulations, in which the theoretical results are applied to typical neural networks with and without delay terms, demonstrate the effectiveness and feasibility of the proposed technique.
Pages: 249136:1-249136:7
Citations: 6
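
The abstract does not spell out the control scheme; in the periodically intermittent case (the simplest instance, written here only for orientation), a coupling controller with gain k is switched on during the first part of each period and off for the rest, and "general" intermittent control typically allows the work intervals to vary from period to period:

    u(t) =
    \begin{cases}
      k\,\bigl(y(t) - x(t)\bigr), & nT \le t < nT + \delta,\\
      0, & nT + \delta \le t < (n+1)T,
    \end{cases}
    \qquad n = 0, 1, 2, \dots

where T is the control period and delta < T is the control width.
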
Applying Artificial Neural Networks for Face Recognition
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/673016
T. Le
Abstract: This paper introduces novel models for every step of a face recognition system. For face detection, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to perform the task efficiently. Faces detected by ABANN are then aligned with an Active Shape Model and a Multilayer Perceptron; in this alignment step we propose a new 2D local texture model based on a Multilayer Perceptron, whose classifier significantly improves the accuracy and robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, efficiency is improved by combining two methods: a geometric-feature-based method and Independent Component Analysis. In the face matching step, we apply a model that links many neural networks together to match geometric features of the human face, which we call a Multi Artificial Neural Network. The MIT+CMU database is used to evaluate the proposed face detection and alignment methods, and experimental results of all steps on the Caltech database show the feasibility of the proposed model.
Pages: 673016:1-673016:16
Citations: 76
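
A minimal sketch of the feature-combination idea in the extraction step: geometric features concatenated with ICA features. The geometric_extractor argument is a hypothetical placeholder for the paper's geometric-feature method, and scikit-learn's FastICA stands in for the ICA stage; neither is the paper's exact implementation.

    # Combine geometric features with ICA features for each face image.
    import numpy as np
    from sklearn.decomposition import FastICA

    def extract_features(face_images, geometric_extractor, n_ica=20):
        """face_images: (n, h*w) array of flattened grey-level faces."""
        ica = FastICA(n_components=n_ica, random_state=0)
        ica_feats = ica.fit_transform(face_images)          # (n, n_ica)
        geo_feats = np.array([geometric_extractor(img) for img in face_images])
        return np.hstack([geo_feats, ica_feats])             # combined vector
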
Soft Topographic Maps for Clustering and Classifying Bacteria Using Housekeeping Genes
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/617427
M. L. Rosa, R. Rizzo, A. Urso
Abstract: The Self-Organizing Map (SOM) algorithm is widely used for building topographic maps of data represented in a vector space, but it cannot operate directly on dissimilarity data. The Soft Topographic Map (STM) algorithm extends SOM to arbitrary distance measures, creating a map from a set of units organized in a rectangular lattice that defines data neighbourhood relationships. In recent years, a new standard for identifying bacteria from genotypic information has begun to be developed: phylogenetic relationships of bacteria can be determined by comparing a stable part of the bacterial genetic code, the so-called "housekeeping genes." The goal of this work is to build a topographic representation of bacteria clusters by means of self-organizing maps, starting from genotypic features of housekeeping genes.
Pages: 617427:1-617427:8
Citations: 5
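
A minimal online SOM sketch for vectorial data, included to make the lattice-based map concrete; the paper's Soft Topographic Map generalizes this idea to arbitrary dissimilarity matrices via deterministic annealing, which is not reproduced here.

    # Plain online SOM on a rows x cols rectangular lattice.
    import numpy as np

    def train_som(X, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(rows * cols, X.shape[1]))       # codebook vectors
        grid = np.array([(r, c) for r in range(rows) for c in range(cols)],
                        dtype=float)
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)                       # decaying rate
            sigma = sigma0 * (1 - e / epochs) + 0.5           # shrinking radius
            for x in rng.permutation(X):
                bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
                d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # lattice distances
                h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood
                W += lr * h[:, None] * (x - W)                # pull units toward x
        return W.reshape(rows, cols, -1)
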
Multilayer Perceptron for Prediction of 2006 World Cup Football Game
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/374816
Kou-Yuan Huang, Kai-Ju Chen
Abstract: A multilayer perceptron (MLP) with the back-propagation learning rule is adopted to predict the winning rates of two teams from their official statistics at the previous stages of the 2006 World Cup Football Game. Training samples come from three classes: win, draw, and loss. At each new stage, new training samples selected from the previous stages are added to the training set and the network is retrained, a form of on-line learning. Eight features are selected by ad hoc choice. The theorem of Mirchandani and Cao is used to determine the number of hidden nodes, and after testing learning convergence the MLP is fixed as an 8-2-3 model. The learning rate and momentum coefficient are determined by cross-learning. The prediction accuracy reaches 75% when draw games are excluded.
Pages: 374816:1-374816:8
Citations: 18
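
A sketch of the 8-2-3 architecture the abstract describes (8 input features, one hidden layer of 2 nodes, 3 output classes: win, draw, loss), using scikit-learn. The actual match statistics, feature selection, and stage-by-stage retraining schedule are not reproduced, so random data stands in.

    # 8-2-3 multilayer perceptron trained with SGD + momentum.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((64, 8))              # 8 statistical features per match
    y = rng.integers(0, 3, size=64)      # 0 = win, 1 = draw, 2 = loss

    clf = MLPClassifier(hidden_layer_sizes=(2,), activation="logistic",
                        solver="sgd", learning_rate_init=0.5, momentum=0.9,
                        max_iter=5000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
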
An Optimal Implementation on FPGA of a Hopfield Neural Network
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/189368
W. Mansour, R. Ayoubi, H. Ziade, R. Velazco, W. Falou
Abstract: The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents the implementation of a parallel Hopfield Neural Network (HNN) architecture on an SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most other implementations require O(N) multiplications and O(N) additions.
Pages: 189368:1-189368:9
Citations: 12
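
A plain software reference for Hopfield storage and recall, to show what the FPGA architecture parallelizes; the hardware scheme that achieves O(1) multiplications and O(log N) additions per update is not reproduced here.

    # Hebbian storage and synchronous recall for a Hopfield network.
    import numpy as np

    def hopfield_store(patterns):
        """Hebbian weight matrix for +/-1 patterns of shape (p, N)."""
        p, N = patterns.shape
        W = patterns.T @ patterns / N
        np.fill_diagonal(W, 0.0)          # no self-connections
        return W

    def hopfield_recall(W, x, steps=10):
        """Synchronous recall; x is a (possibly noisy) +/-1 vector."""
        for _ in range(steps):
            x = np.sign(W @ x)
            x[x == 0] = 1                  # break ties toward +1
        return x
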
Early FDI Based on Residuals Design According to the Analysis of Models of Faults: Application to DAMADICS
Adv. Artif. Neural Syst. Pub Date: 2011-01-01 DOI: 10.1155/2011/453169
Y. Kourd, D. Lefebvre, N. Guersi
Abstract: The increased complexity of plants and the development of sophisticated control systems have encouraged the parallel development of efficient, rapid fault detection and isolation (FDI) systems, and FDI in industrial systems has lately become of great significance. This paper proposes a new technique for short-time fault detection and diagnosis in nonlinear dynamic systems with multiple inputs and multiple outputs. Its main contribution is an FDI scheme built on reference models of fault-free and faulty behaviour designed with neural networks. Fault detection is based on residuals obtained by comparing measured signals with the outputs of the fault-free reference model; the Euclidean distance from the outputs of the fault models to the measurements then yields fault isolation. The advantage of this method is that it provides not only early detection but also early diagnosis, thanks to the parallel computation of the fault models and the proposed decision algorithm. The effectiveness of the approach is illustrated with simulations on the DAMADICS benchmark.
Pages: 453169:1-453169:10
Citations: 12
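
A sketch of the detection-then-isolation logic described in the abstract. fault_free_model, fault_models, and threshold are hypothetical placeholders; the paper uses neural-network reference models trained on the DAMADICS benchmark, which are not reproduced here.

    # Residual-based fault detection, then isolation by nearest fault model.
    import numpy as np

    def detect_and_isolate(u, y_measured, fault_free_model, fault_models,
                           threshold):
        """u: input vector; y_measured: measured output vector;
        fault_models: dict mapping fault name -> model callable."""
        residual = y_measured - fault_free_model(u)     # deviation from normal
        if np.linalg.norm(residual) <= threshold:
            return None                                 # no fault detected
        # Isolation: the fault model closest to the measurements (Euclidean).
        distances = {name: np.linalg.norm(y_measured - model(u))
                     for name, model in fault_models.items()}
        return min(distances, key=distances.get)
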