{"title":"Continuous Learning Method for a Continuous Dynamical Control in a Partially Observable Universe","authors":"F. Dambreville","doi":"10.1109/ICIF.2006.301623","DOIUrl":null,"url":null,"abstract":"In this paper, we are interested in the optimal dynamical control of sensors based on partial and noisy observations. These problems are related to the POMDP family. In this case however, we are manipulating continuous-valued controls and continuous-valued decisions. While the dynamical programming method will rely on a discretization of the problem, we are dealing here directly with the continuous data. Moreover, our purpose is to address the full past observation range. Our approach is to modelize the POMDP strategies by means of dynamic Bayesian networks. A method, based on the cross-entropy is implemented for optimizing the parameters of such DBN, relatively to the POMDP problem. In this particular work, the dynamic Bayesian networks are built from semi-continuous probabilistic laws, so as to ensure the manipulation of continuous data","PeriodicalId":248061,"journal":{"name":"2006 9th International Conference on Information Fusion","volume":"41 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 9th International Conference on Information Fusion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIF.2006.301623","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we are interested in the optimal dynamical control of sensors based on partial and noisy observations. Such problems belong to the POMDP family. Here, however, both the controls and the decisions are continuous-valued. Whereas dynamic programming methods rely on a discretization of the problem, we deal directly with the continuous data. Moreover, our purpose is to take the full history of past observations into account. Our approach is to model the POMDP strategies by means of dynamic Bayesian networks (DBNs). A method based on the cross-entropy principle is implemented for optimizing the parameters of such a DBN relative to the POMDP objective. In this particular work, the dynamic Bayesian networks are built from semi-continuous probabilistic laws, so as to ensure the handling of continuous data.
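To illustrate the general cross-entropy optimization scheme the abstract refers to, the sketch below applies a generic cross-entropy parameter search to a continuous-valued control policy acting on noisy, partial observations. It is only a minimal illustration under assumed ingredients: the toy 1-D plant, the linear observation-feedback policy, and the quadratic cost are all hypothetical stand-ins, not the paper's DBN construction or semi-continuous probabilistic laws.

```python
# Hedged sketch: cross-entropy (CE) search over the parameters of a
# continuous-valued control policy evaluated on a toy partially observable
# plant. The plant, policy form, and cost are illustrative assumptions only;
# the paper itself parameterizes strategies with dynamic Bayesian networks.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, horizon=20):
    """Cumulative reward of the linear policy u = theta[0]*y + theta[1]
    on a 1-D state observed through additive Gaussian noise (assumed model)."""
    x = rng.normal(2.0, 0.5)              # hidden state, never seen directly
    total = 0.0
    for _ in range(horizon):
        y = x + rng.normal(0.0, 0.3)      # partial, noisy observation
        u = theta[0] * y + theta[1]       # continuous-valued control
        x = 0.9 * x + u + rng.normal(0.0, 0.05)
        total += -x**2 - 0.1 * u**2       # quadratic regulation cost (negated)
    return total

def cross_entropy_search(dim=2, pop=200, elite_frac=0.1, iters=30):
    """Iteratively refit a Gaussian sampling law to the elite parameter draws."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        thetas = mu + sigma * rng.standard_normal((pop, dim))
        scores = np.array([np.mean([simulate(t) for _ in range(5)]) for t in thetas])
        elite = thetas[np.argsort(scores)[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu

if __name__ == "__main__":
    best = cross_entropy_search()
    print("CE estimate of the policy parameters:", best)
```

The same loop structure carries over when the policy is a richer parametric object such as a DBN: only the sampling law over parameters and the simulation-based evaluation of a candidate strategy need to change.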