{"title":"使用多项式模型的多元逼近方法:比较研究","authors":"I. López-Peña, Ángel Fernando Kuri Morales","doi":"10.1109/MICAI.2015.26","DOIUrl":null,"url":null,"abstract":"A frequent problem in artificial intelligence is the one associated with the so-called supervised learning: the need to find an expression of a dependent variable as a function of several independent ones. There are several algorithms that allow us to find a solution to the bivariate problems. However, the true challenge arises when the number of independent variables is large. Relatively new tools have been developed to tackle this kind of problems. Thus, multi-Layer Perceptron networks (MLPs) may be seen as multivariate approximation algorithms. However, a commonly cited disadvantage of MLPs is that they remain a \"black-box\" kind of method: they do not yield an explicit closed expression to the solution. Rather, we are left with the need of expressing it via the architecture of the MLP and the value of the trained connections. In this paper we explore three methods that allow us to express the solution to multivariate problems in a closed form: a) Fast Ascent (FA), b) Levenberg-Marquardt (LM) and c) Powell's Dog-Leg (PM) algorithms. These yield closed expressions when presented with multiple independent variable problems. In this paper we discuss and compare these four methods and their possible application to pattern recognition in mobile robot environments and artificial intelligence in general.","PeriodicalId":448255,"journal":{"name":"2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Multivariate Approximation Methods Using Polynomial Models: A Comparative Study\",\"authors\":\"I. 
López-Peña, Ángel Fernando Kuri Morales\",\"doi\":\"10.1109/MICAI.2015.26\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A frequent problem in artificial intelligence is the one associated with the so-called supervised learning: the need to find an expression of a dependent variable as a function of several independent ones. There are several algorithms that allow us to find a solution to the bivariate problems. However, the true challenge arises when the number of independent variables is large. Relatively new tools have been developed to tackle this kind of problems. Thus, multi-Layer Perceptron networks (MLPs) may be seen as multivariate approximation algorithms. However, a commonly cited disadvantage of MLPs is that they remain a \\\"black-box\\\" kind of method: they do not yield an explicit closed expression to the solution. Rather, we are left with the need of expressing it via the architecture of the MLP and the value of the trained connections. In this paper we explore three methods that allow us to express the solution to multivariate problems in a closed form: a) Fast Ascent (FA), b) Levenberg-Marquardt (LM) and c) Powell's Dog-Leg (PM) algorithms. These yield closed expressions when presented with multiple independent variable problems. 
In this paper we discuss and compare these four methods and their possible application to pattern recognition in mobile robot environments and artificial intelligence in general.\",\"PeriodicalId\":448255,\"journal\":{\"name\":\"2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MICAI.2015.26\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MICAI.2015.26","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
A frequent problem in artificial intelligence is the one associated with so-called supervised learning: the need to find an expression for a dependent variable as a function of several independent ones. Several algorithms allow us to find a solution to bivariate problems. However, the true challenge arises when the number of independent variables is large. Relatively new tools have been developed to tackle this kind of problem. Thus, Multi-Layer Perceptron networks (MLPs) may be seen as multivariate approximation algorithms. However, a commonly cited disadvantage of MLPs is that they remain a "black-box" method: they do not yield an explicit closed expression for the solution. Rather, we are left with the need to express it via the architecture of the MLP and the values of its trained connections. In this paper we explore three methods that allow us to express the solution to multivariate problems in closed form: a) the Fast Ascent (FA), b) the Levenberg-Marquardt (LM), and c) Powell's Dog-Leg (PM) algorithms. These yield closed expressions when presented with problems of multiple independent variables. We discuss and compare these three methods and their possible application to pattern recognition in mobile robot environments and to artificial intelligence in general.
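To illustrate the kind of closed-form fitting the abstract contrasts with MLPs, the following is a minimal sketch of one of the three compared approaches, Levenberg-Marquardt, applied to a multivariate polynomial model. It is not the authors' implementation: the polynomial form, the synthetic data, and the use of SciPy's `least_squares` with `method="lm"` are illustrative assumptions.

```python
# Sketch (not the paper's code): fit a closed-form bivariate polynomial
# c0 + c1*x1 + c2*x2 + c3*x1*x2 with the Levenberg-Marquardt algorithm.
import numpy as np
from scipy.optimize import least_squares

# Synthetic data: two independent variables, one noisy dependent variable.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
y += rng.normal(0.0, 0.01, size=200)

def model(c, X):
    # Explicit closed expression for the approximant.
    return c[0] + c[1] * X[:, 0] + c[2] * X[:, 1] + c[3] * X[:, 0] * X[:, 1]

def residuals(c, X, y):
    return model(c, X) - y

# Levenberg-Marquardt minimizes the sum of squared residuals.
fit = least_squares(residuals, x0=np.zeros(4), args=(X, y), method="lm")
print(np.round(fit.x, 2))  # recovered coefficients, near [1, 2, -3, 0.5]
```

Unlike a trained MLP, the result here is directly readable: `fit.x` holds the coefficients of an explicit polynomial expression for the dependent variable.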