Algorithms for Verifying Deep Neural Networks
Changliu Liu, Tomer Arnon, Christopher Lazarus, Clark W. Barrett, Mykel J. Kochenderfer
Found. Trends Optim., published 2019-03-15. DOI: https://doi.org/10.1561/2400000035

Abstract: Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems.
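The reachability-style verifiers the article surveys propagate a set of possible inputs through the network layer by layer. As an illustration only (the toy weights, input box, and output property below are invented, not taken from the article), the simplest such scheme, interval bound propagation, can be sketched in a few lines:

```python
def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b.

    Each output is maximised by taking hi where its weight is positive
    and lo where it is negative, and vice versa for the minimum.
    """
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(u)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Toy 2-2-1 ReLU network; property: output < 10 on the whole input box.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]
lo, hi = [-1.0, -1.0], [1.0, 1.0]                 # input box
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(hi[0] < 10.0)  # prints True: a sound (possibly loose) certificate
```

The bounds are sound but not tight, which is exactly the trade-off the surveyed methods refine with tighter relaxations or search.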
Optimization Methods for Financial Index Tracking: From Theory to Practice
Konstantinos Benidis, Yiyong Feng, D. Palomar
Found. Trends Optim., published 2018-06-07. DOI: https://doi.org/10.1561/2400000021

Abstract: Index tracking is a very popular passive investment strategy. Since an index cannot be traded directly, index tracking refers to the process of creating a portfolio that approximates its performance. A straightforward way to do that is to purchase all the assets that compose the index in appropriate quantities. However, to simplify execution, avoid small and illiquid positions, and reduce transaction costs, it is desirable that the tracking portfolio consist of a small number of assets; that is, we wish to create a sparse portfolio. Although index tracking comes from the financial industry, it is in fact a pure signal processing problem: a regression of historical financial data subject to portfolio constraints, with some caveats and particularities. Furthermore, the sparse index tracking problem is similar to many sparsity formulations in signal processing in the sense that it is a regression problem with sparsity requirements. In its original form, sparse index tracking can be formulated as a combinatorial optimization problem. A commonly used approach is mixed-integer programming (MIP), which can solve small problems. Nevertheless, MIP solvers are not applicable to high-dimensional problems, since the running time can be prohibitive for practical use. The goal of this monograph is to provide an in-depth overview of the index tracking problem and to analyze the caveats and practical issues an investor might face, such as the frequent rebalancing of weights, changes in the index composition, transaction costs, etc. Furthermore, a unified framework for a large variety of sparse index tracking formulations is provided. The derived algorithms are very attractive for practical use, since they provide efficient tracking portfolios orders of magnitude faster than MIP solvers.
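The regression-with-sparsity structure of the problem can be made concrete with a simple baseline that is not one of the monograph's algorithms: iterative hard thresholding on the least-squares tracking error, with invented toy return data. The index here is, by construction, an equal mix of the first two assets:

```python
def iht_track(X, r, k, step=0.05, iters=2000):
    """Minimise the tracking error ||Xw - r||^2 over k-sparse weights w
    by gradient descent followed by a hard-thresholding projection.

    X[t][j] is the return of asset j in period t; r[t] is the index return.
    """
    T, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        resid = [sum(X[t][j] * w[j] for j in range(n)) - r[t] for t in range(T)]
        grad = [2 * sum(X[t][j] * resid[t] for t in range(T)) for j in range(n)]
        w = [w[j] - step * grad[j] for j in range(n)]
        # projection onto k-sparse vectors: keep the k largest magnitudes
        keep = sorted(range(n), key=lambda j: -abs(w[j]))[:k]
        w = [w[j] if j in keep else 0.0 for j in range(n)]
    return w

R = [[1.0, 2.0, 0.5],      # toy per-period asset returns (in percent)
     [-2.0, 1.0, 0.4],
     [3.0, -1.0, 0.2]]
index = [1.5, -0.5, 1.0]   # index = 0.5 * asset0 + 0.5 * asset1
w = iht_track(R, index, k=2)   # recovers roughly [0.5, 0.5, 0.0]
```

The monograph's own algorithms replace this combinatorial projection with majorization-minimization and relaxation schemes that scale to realistic universes; the sketch only shows the shape of the problem being solved.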
The Many Faces of Degeneracy in Conic Optimization
D. Drusvyatskiy, Henry Wolkowicz
Found. Trends Optim., published 2017-06-12. DOI: https://doi.org/10.1561/2400000011

Abstract: Slater's condition -- existence of a "strictly feasible solution" -- is a common assumption in conic optimization. Without strict feasibility, first-order optimality conditions may be meaningless, the dual problem may yield little information about the primal, and small changes in the data may render the problem infeasible. Hence, failure of strict feasibility can negatively impact off-the-shelf numerical methods, such as primal-dual interior point methods, in particular. New optimization modelling techniques and convex relaxations for hard nonconvex problems have shown that the loss of strict feasibility is a more pronounced phenomenon than has previously been realized. In this text, we describe various reasons for the loss of strict feasibility, whether due to poor modelling choices or (more interestingly) rich underlying structure, and discuss ways to cope with it and, in many pronounced cases, how to use it as an advantage. In large part, we emphasize the facial reduction preprocessing technique due to its mathematical elegance, geometric transparency, and computational potential.
Multi-Period Trading via Convex Optimization
Stephen P. Boyd, Enzo Busseti, Steven Diamond, R. N. Kahn, Kwangmoo Koh, P. Nystrup, Jan Speth
Found. Trends Optim., published 2017-04-28. DOI: https://doi.org/10.1561/2400000023

Abstract: We consider a basic model of multi-period trading, which can be used to evaluate the performance of a trading strategy. We describe a framework for single-period optimization, where the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost and holding cost such as the borrowing cost for shorting assets. We then describe a multi-period version of the trading method, where optimization is used to plan a sequence of trades, with only the first one executed, using estimates of future quantities that are unknown when the trades are chosen. The single-period method traces back to Markowitz; the multi-period methods trace back to model predictive control. Our contribution is to describe the single-period and multi-period methods in one simple framework, giving a clear description of the development and the approximations made. In this paper we do not address a critical component in a trading algorithm, the predictions or forecasts of future quantities. The methods we describe in this paper can be thought of as good ways to exploit predictions, no matter how they are made. We have also developed a companion open-source software library that implements many of the ideas and methods described in the paper.
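The single-period trade-off can be seen in closed form under strong simplifying assumptions that are mine, not the paper's: uncorrelated assets (a diagonal risk model) and a proportional transaction cost, so the problem decouples per asset. The paper's general formulation couples assets and constraints and needs a convex solver; the point of this sketch is only how a linear cost term creates a no-trade band around the current holding:

```python
def single_period_trades(holdings, mu, sigma2, gamma, kappa):
    """Per-asset trade maximising  mu*h - gamma*sigma2*h^2 - kappa*|h - w|,
    i.e. expected return minus a risk penalty minus a proportional
    transaction cost on the trade away from the current holding w.
    """
    trades = []
    for w, m, s2 in zip(holdings, mu, sigma2):
        target = m / (2 * gamma * s2)     # unconstrained Markowitz holding
        band = kappa / (2 * gamma * s2)   # half-width of the no-trade band
        if abs(target - w) <= band:
            trades.append(0.0)            # trading cost outweighs the benefit
        elif target > w:
            trades.append(target - band - w)
        else:
            trades.append(target + band - w)
    return trades

# Asset 0 is far from its target, so we trade (stopping short of the
# target by the band); asset 1 is already inside its band, so we hold.
trades = single_period_trades(
    holdings=[0.0, 0.02], mu=[0.04, 0.001],
    sigma2=[0.04, 0.04], gamma=1.0, kappa=0.002)
```

The no-trade region is the qualitative behaviour to take away: with transaction costs, the optimal policy does not chase the frictionless target exactly.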
Introduction to Online Convex Optimization
Elad Hazan
Found. Trends Optim., published 2016-08-10. DOI: https://doi.org/10.1561/2400000013

Abstract: This monograph portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary as well as beneficial to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular success in modeling and systems that are now part of our daily lives.
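The canonical "learn as you go" method in this setting is online projected gradient descent: play a point, observe the loss, take a gradient step, project back onto the feasible set. A minimal sketch on an invented quadratic loss sequence (the losses alternate between pulling toward 0 and toward 1, so the best fixed decision in hindsight is 0.5):

```python
import math

def ogd(zs, eta0=0.5):
    """Online gradient descent on the losses f_t(x) = (x - z_t)^2 over the
    feasible interval [0, 1], with the standard eta0/sqrt(t) step size."""
    x, plays = 0.0, []
    for t, z in enumerate(zs, start=1):
        plays.append(x)                    # commit to x_t before seeing f_t
        grad = 2 * (x - z)                 # gradient of the revealed loss
        x -= eta0 / math.sqrt(t) * grad
        x = min(1.0, max(0.0, x))          # projection onto [0, 1]
    return plays

plays = ogd([t % 2 for t in range(1000)])
avg = sum(plays) / len(plays)              # hovers near the hindsight optimum 0.5
```

The decisions oscillate, but the shrinking step size drives the average play toward the best fixed decision, which is the regret guarantee this framework is built around.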
Low-Rank Semidefinite Programming: Theory and Applications
A. Lemon, A. M. So, Y. Ye
Found. Trends Optim., published 2016-05-04. DOI: https://doi.org/10.1561/2400000009

Abstract: Finding low-rank solutions of semidefinite programs is important in many applications. For example, semidefinite programs that arise as relaxations of polynomial optimization problems are exact relaxations when the semidefinite program has a rank-1 solution. Unfortunately, computing a minimum-rank solution of a semidefinite program is an NP-hard problem. In this paper we review the theory of low-rank semidefinite programming, presenting theorems that guarantee the existence of a low-rank solution, heuristics for computing low-rank solutions, and algorithms for finding low-rank approximate solutions. Then we present applications of the theory to trust-region problems and signal processing.
Chordal Graphs and Semidefinite Optimization
L. Vandenberghe, Martin S. Andersen
Found. Trends Optim., published 2015-04-30. DOI: https://doi.org/10.1561/2400000006

Abstract: Chordal graphs play a central role in techniques for exploiting sparsity in large semidefinite optimization problems and in related convex optimization problems involving sparse positive semidefinite matrices. Chordal graph properties are also fundamental to several classical results in combinatorial optimization, linear algebra, statistics, signal processing, machine learning, and nonlinear optimization. This survey covers the theory and applications of chordal graphs, with an emphasis on algorithms developed in the literature on sparse Cholesky factorization. These algorithms are formulated as recursions on elimination trees, supernodal elimination trees, or clique trees associated with the graph. The best known example is the multifrontal Cholesky factorization algorithm, but similar algorithms can be formulated for a variety of related problems, including the computation of the partial inverse of a sparse positive definite matrix, positive semidefinite and Euclidean distance matrix completion problems, and the evaluation of gradients and Hessians of logarithmic barriers for cones of sparse positive semidefinite matrices and their dual cones. The purpose of the survey is to show how these techniques can be applied in algorithms for sparse semidefinite optimization, and to point out the connections with related topics outside semidefinite optimization, such as probabilistic networks, matrix completion problems, and partial separability in nonlinear optimization.
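A basic computation underlying these elimination-ordering algorithms is recognizing chordality itself. The standard test (maximum cardinality search followed by a perfect-elimination-ordering check, due to Tarjan and Yannakakis) is a textbook algorithm, not code from the survey, but it can be sketched compactly:

```python
def is_chordal(adj):
    """Chordality test: build a maximum-cardinality-search ordering and
    check that its reverse is a perfect elimination ordering.
    adj maps each vertex to the set of its neighbours."""
    # MCS: repeatedly visit the unvisited vertex with most visited neighbours.
    order, weight = [], {v: 0 for v in adj}
    while weight:
        v = max(weight, key=weight.get)
        order.append(v)
        del weight[v]
        for u in adj[v]:
            if u in weight:
                weight[u] += 1
    order.reverse()                       # candidate elimination ordering
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        later = [u for u in adj[v] if pos[u] > i]
        if later:
            w = min(later, key=pos.get)   # v's earliest later neighbour
            # eliminating v must leave {w, other later neighbours} a clique
            if any(u != w and u not in adj[w] for u in later):
                return False
    return True
```

For example, a 4-cycle is not chordal, but adding one chord makes it chordal; in sparse semidefinite optimization one typically goes one step further and computes a chordal extension of the sparsity graph rather than testing it.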
Proximal Algorithms
Neal Parikh, Stephen P. Boyd
Found. Trends Optim., published 2013-11-27. DOI: https://doi.org/10.1561/2400000003

Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
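The two claims in the abstract, that proximal subproblems often have closed forms and that they generalize projection, are easiest to see on the L1 norm, whose proximal operator is elementwise soft thresholding. A minimal sketch (the one-dimensional example problem at the end is invented for illustration):

```python
def prox_l1(v, lam):
    """prox of f(x) = lam * ||x||_1: elementwise soft thresholding,
    the best-known closed-form proximal operator."""
    return [x - lam if x > lam else x + lam if x < -lam else 0.0 for x in v]

def prox_box(v, lo, hi):
    """prox of the indicator of the box [lo, hi]: plain projection,
    showing that prox generalises projection onto a convex set."""
    return [min(hi, max(lo, x)) for x in v]

# Proximal gradient (ISTA) on  min_x 0.5*(2x - 4)^2 + |x|:
# a gradient step on the smooth term, then a prox step on the |x| term.
x, step = 0.0, 0.25                 # step = 1/L, where L = 4 here
for _ in range(50):
    grad = 4 * x - 8                # gradient of the smooth part
    x = prox_l1([x - step * grad], step * 1.0)[0]
print(x)                            # prints 1.75, the exact minimiser
```

The same two-step pattern, with the prox evaluated in closed form, is what makes these methods practical for large nonsmooth problems.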