{"title":"Applications","authors":"Geon Dae Moon","doi":"10.1002/9781119508557.ch19","DOIUrl":null,"url":null,"abstract":": I will describe some examples in genomics and in neuroscience where one needs to find different fully data-driven adaptive statistical methods for testing or for estimation. In all those examples, the key is to find concentration inequalities which either allow to very precisely calibrate the tuning parameter or to deeply understand how quantiles of test statistics are behaving. The main concentration tools are either \"Talagrand\" type inequalities for counting processes, Bernstein or Rosenthal type inequalities, or U-statistics - chaos inequalities. Abstract : In this lecture we present some new concentration inequalities for Feynman-Kac particle processes. We analyze different types of stochastic particle models, including particle profile occupation measures, genealogical tree based evolution models, particle free energies, as well as backward Markov chain particle models. We illustrate these results with a series of topics related to computational physics and biology, stochastic optimization, signal processing and Bayesian statistics, and many other probabilistic machine learning algorithms. Special emphasis is given to the stochastic modeling, and to the quantitative performance analysis of a series of advanced Monte Carlo methods, including particle filters, genetic type island models, Markov bridge models, and interacting particle Markov chain Monte Carlo methodologies. Abstract : The mathematical foundations of statistical learning theory heavily relies on concentration inequalities and empirical processes techniques. Learning an order relation over a Banach space involves performance measures which have higher order statistics, such as rank statistics, as empirical counterparts. The classical questions of consistency, universal and fast rates of convergence require dedicated tools which involve projection arguments and concentration inequalities for U- and R-processes. In the talk, we will present some results and open problems motivated by statistical problems of major interest. : in high Abstract : I will describe some results about concentration of volume of high dimensional convex bodies obtained in the last decade. Central limit theorem for convex bodies is one of the main achievement of these series of work. I will also present some open problems, like the thin shell conjecture and the problem of spectral gap, a conjecture due to Kannan Lovasz and Simonovits. Extension of these results to new classes of probability measures, like Cauchy measure or more generally κ -concave measures will be discussed. Abstract : Compressed sensing is an area of information theory where one seeks to recover an unknown signal from few measurements. A signal is often modeled as a vector in R n , and linear measurements are given as y = Ax where A is an m by n matrix. Best known results of compressed sensing are for random linear measurements, thus A is a random matrix. We will learn about some probabilistic successes and challenges in this area, with many connections to sampling theory, random matrix theory, and stochastic geometry. Abstract: I will provide a survey of recent results concerning probabilistic ap-proximations, obtained via the use of the Malliavin calculus of variations and the Stein and Chen-Stein methods. 
One advantage of this approach is that upper bounds are often expressed in terms of the variance of some random variable, so that well-known estimates (like e.g. given the Poincaré inequality and its general-izations) can be directly applied. I will also provide an overview of applications, ranging from fractional processes to random fields on homogeneous spaces, and from density estimates to geometric random graphs. Abstract : In the recent years the multi-armed bandit problem has attracted a lot of attention in the theoretical learning community. This growing interest is a con-sequence of the large number of problems that can be modelized as a multi-armed bandit: web advertisement, dynamic pricing, online optimization, ect. Bandits algorithms are also used as building blocks in more complicated scenarios such as reinforcement learning, model selection problems, or games. In this talk I will focus on the so-called adversarial model for multi-armed bandits. I will show an algorithm that solves a long-standing open problem regarding the minimax rate for this framework. I will also discuss the recent extension of this algorithm to bandits with a very large, but structured, set of arms (such as paths on a graph). Abstract : Dans cet exposé, nous effectuerons un rapide survol de l’étude probabiliste des grands graphes planaires aléatoires. Né au début des années 2000, motivé par des applications en physique théorique, combinatoire et géométrie, ce champ de recherche s’est beaucoup développé depuis. L’objectif principal est de comprendre la structure à grande échelle de graphes (ou cartes) planaires uni-formes lorsque la taille tend vers l’infini. L’année dernière, Le Gall et Miermont ont montré qu’à la limite, une surface aléatoire (la carte brownienne) apparaît","PeriodicalId":186753,"journal":{"name":"Diarylethene Molecular Photoswitches","volume":"168 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diarylethene Molecular Photoswitches","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/9781119508557.ch19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstracts
Abstract: I will describe some examples in genomics and in neuroscience where one needs fully data-driven, adaptive statistical methods for testing or for estimation. In all of these examples, the key is to find concentration inequalities that either allow the tuning parameter to be calibrated very precisely or give a deep understanding of how the quantiles of the test statistics behave. The main concentration tools are Talagrand-type inequalities for counting processes, Bernstein- or Rosenthal-type inequalities, and U-statistics/chaos inequalities. (The classical Bernstein inequality is recalled after this group of abstracts.)

Abstract: In this lecture we present some new concentration inequalities for Feynman-Kac particle processes. We analyze different types of stochastic particle models, including particle profile occupation measures, genealogical-tree-based evolution models, particle free energies, and backward Markov chain particle models. We illustrate these results with a series of topics related to computational physics and biology, stochastic optimization, signal processing and Bayesian statistics, and many other probabilistic machine learning algorithms. Special emphasis is given to stochastic modeling and to the quantitative performance analysis of a series of advanced Monte Carlo methods, including particle filters, genetic-type island models, Markov bridge models, and interacting particle Markov chain Monte Carlo methodologies. (A minimal particle-filter sketch follows this group of abstracts.)

Abstract: The mathematical foundations of statistical learning theory rely heavily on concentration inequalities and empirical-process techniques. Learning an order relation over a Banach space involves performance measures whose empirical counterparts are higher-order statistics, such as rank statistics. The classical questions of consistency and of universal and fast rates of convergence require dedicated tools involving projection arguments and concentration inequalities for U- and R-processes. In the talk, we will present some results and open problems motivated by statistical problems of major interest. (The empirical ranking risk is written out as a U-statistic after this group of abstracts.)

Abstract: I will describe some results on the concentration of volume in high-dimensional convex bodies obtained over the last decade. The central limit theorem for convex bodies is one of the main achievements of this series of works. I will also present some open problems, such as the thin-shell conjecture and the spectral-gap problem, a conjecture due to Kannan, Lovász, and Simonovits. Extensions of these results to new classes of probability measures, such as the Cauchy measure or, more generally, κ-concave measures, will be discussed. (Both conjectures are stated, for orientation, after this group of abstracts.)

Abstract: Compressed sensing is an area of information theory in which one seeks to recover an unknown signal from few measurements. A signal is often modeled as a vector in R^n, and linear measurements are given as y = Ax, where A is an m-by-n matrix. The best-known results in compressed sensing are for random linear measurements, so A is a random matrix. We will learn about some probabilistic successes and challenges in this area, with many connections to sampling theory, random matrix theory, and stochastic geometry. (A small sparse-recovery sketch appears after this group of abstracts.)
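The first abstract above lists Bernstein-type bounds among its main concentration tools. For reference, here is the standard Bernstein inequality for bounded, independent, centered random variables; this is a textbook statement, not a result specific to the talk.

```latex
% Bernstein's inequality: X_1, ..., X_n independent, E[X_i] = 0, |X_i| <= M,
% and v = sum_i E[X_i^2]. Then for every t > 0,
\[
  \mathbb{P}\!\left( \sum_{i=1}^{n} X_i \ge t \right)
  \;\le\;
  \exp\!\left( - \frac{t^2}{2\bigl( v + \tfrac{M t}{3} \bigr)} \right).
\]
```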
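The Feynman-Kac abstract above is concerned with particle approximation schemes such as particle filters. The following is a minimal bootstrap particle filter sketch on a toy linear-Gaussian state-space model; the model, the parameter values, and the function names are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Minimal bootstrap particle filter for the toy model
    x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, sigma_x, size=n_particles)  # initial particle cloud
    means = []
    for y in observations:
        # Mutation: propagate particles through the state dynamics.
        particles = phi * particles + rng.normal(0.0, sigma_x, size=n_particles)
        # Selection: weight particles by the observation likelihood.
        log_w = -0.5 * ((y - particles) / sigma_y) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Filtering estimate of E[x_t | y_1:t].
        means.append(np.sum(w * particles))
        # Multinomial resampling to fight weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Usage: simulate data from the same toy model and filter it.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal()
    ys.append(x + rng.normal())
print(bootstrap_particle_filter(ys)[:5])
```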
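The learning-to-rank abstract above notes that empirical performance measures in this setting are higher-order statistics. A standard example, included only as orientation and not as a claim about the talk's specific results: for a scoring rule s and i.i.d. pairs (X_i, Y_i), the pairwise ranking risk and its empirical counterpart, a U-statistic of order two, read

```latex
\[
  L(s) \;=\; \mathbb{P}\bigl\{ (s(X) - s(X'))\,(Y - Y') < 0 \bigr\},
  \qquad
  L_n(s) \;=\; \frac{2}{n(n-1)} \sum_{1 \le i < j \le n}
     \mathbf{1}\bigl\{ (s(X_i) - s(X_j))\,(Y_i - Y_j) < 0 \bigr\}.
\]
```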
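For the convex-bodies abstract, the two open problems it mentions can be stated briefly in their standard forms. For an isotropic log-concave random vector X in R^n (mean zero, identity covariance):

```latex
% Thin-shell conjecture: the Euclidean norm concentrates in a shell of
% constant width around sqrt(n).
\[
  \operatorname{Var}\bigl( \lVert X \rVert_2 \bigr) \;\le\; C
  \quad\text{for a universal constant } C.
\]
% Kannan--Lovász--Simonovits (spectral gap) conjecture: a dimension-free
% Poincaré inequality for every isotropic log-concave measure \mu.
\[
  \operatorname{Var}_{\mu}(f) \;\le\; C' \int \lVert \nabla f \rVert_2^2 \, d\mu
  \quad\text{for all smooth } f, \text{ with } C' \text{ universal}.
\]
```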
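Finally for this group, the compressed-sensing abstract models measurements as y = Ax with a random matrix A. Below is a minimal sparse-recovery sketch using ISTA (iterative soft-thresholding) for the l1-penalized least-squares problem; the problem sizes, the regularization parameter, and the Gaussian measurement matrix are illustrative assumptions, not the specific results surveyed in the talk.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - step * grad                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Usage: recover a sparse vector from m < n random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                          # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random Gaussian measurement matrix
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))
```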
Abstract: I will provide a survey of recent results concerning probabilistic approximations, obtained via the Malliavin calculus of variations and the Stein and Chen-Stein methods. One advantage of this approach is that the upper bounds are often expressed in terms of the variance of some random variable, so that well-known estimates (such as those given by the Poincaré inequality and its generalizations) can be applied directly. I will also provide an overview of applications, ranging from fractional processes to random fields on homogeneous spaces, and from density estimates to geometric random graphs.

Abstract: In recent years the multi-armed bandit problem has attracted a lot of attention in the theoretical learning community. This growing interest is a consequence of the large number of problems that can be modeled as a multi-armed bandit: web advertising, dynamic pricing, online optimization, etc. Bandit algorithms are also used as building blocks in more complicated scenarios such as reinforcement learning, model selection problems, or games. In this talk I will focus on the so-called adversarial model for multi-armed bandits. I will show an algorithm that solves a long-standing open problem regarding the minimax rate for this framework. I will also discuss the recent extension of this algorithm to bandits with a very large, but structured, set of arms (such as paths on a graph). (A minimal sketch of the classical Exp3 strategy appears after these abstracts.)

Abstract: In this talk, we will give a quick overview of the probabilistic study of large random planar graphs. Born in the early 2000s and motivated by applications in theoretical physics, combinatorics, and geometry, this field of research has developed considerably since then. The main objective is to understand the large-scale structure of uniform planar graphs (or maps) as their size tends to infinity. Last year, Le Gall and Miermont showed that, in the limit, a random surface (the Brownian map) appears.
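As a concrete reference point for the adversarial bandit abstract above, here is a minimal implementation of the classical Exp3 strategy (exponential weights on importance-weighted loss estimates). It is a generic textbook sketch, not the minimax-optimal algorithm discussed in the talk; the horizon, the number of arms, and the learning rate are illustrative choices.

```python
import numpy as np

def exp3(loss_fn, n_arms, horizon, eta=None, seed=0):
    """Exp3 for adversarial bandits: only the loss of the pulled arm is observed
    each round; losses are assumed to lie in [0, 1]."""
    rng = np.random.default_rng(seed)
    if eta is None:
        eta = np.sqrt(2.0 * np.log(n_arms) / (horizon * n_arms))  # standard tuning
    cum_loss_est = np.zeros(n_arms)
    total_loss = 0.0
    for t in range(horizon):
        # Sampling distribution: exponential weights over estimated cumulative losses.
        w = np.exp(-eta * (cum_loss_est - cum_loss_est.min()))
        p = w / w.sum()
        arm = rng.choice(n_arms, p=p)
        loss = loss_fn(t, arm)                 # adversary reveals loss of pulled arm only
        total_loss += loss
        cum_loss_est[arm] += loss / p[arm]     # unbiased importance-weighted estimate
    return total_loss

# Usage with a toy oblivious adversary: arm 0 is slightly better on average.
rng = np.random.default_rng(1)
losses = rng.uniform(size=(1000, 3))
losses[:, 0] *= 0.8
print(exp3(lambda t, a: losses[t, a], n_arms=3, horizon=1000))
```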