{"title":"A Lagrangian shape and topology optimization framework based on semi-discrete optimal transport","authors":"Charles Dapogny, Bruno Levy, Edouard Oudet","doi":"arxiv-2409.07873","DOIUrl":"https://doi.org/arxiv-2409.07873","url":null,"abstract":"This article revolves around shape and topology optimization, in the\u0000applicative context where the objective and constraint functionals depend on\u0000the solution to a physical boundary value problem posed on the optimized\u0000domain. We introduce a novel framework based on modern concepts from\u0000computational geometry, optimal transport and numerical analysis. Its pivotal\u0000feature is a representation of the optimized shape by the cells of an adapted\u0000version of a Laguerre diagram. Although such objects are originally described\u0000by a collection of seed points and weights, recent results from optimal\u0000transport theory suggest a more intuitive parametrization in terms of the seed\u0000points and measures of the associated cells. The polygonal mesh of the shape\u0000induced by this diagram serves as support for the deployment of the Virtual\u0000Element Method for the numerical solution of the physical boundary value\u0000problem at play and the calculation of the objective and constraint\u0000functionals. The sensitivities of the latter are derived next; at first, we\u0000calculate their derivatives with respect to the positions of the vertices of\u0000the Laguerre diagram by shape calculus techniques; a suitable adjoint\u0000methodology is then developed to express them in terms of the seed points and\u0000cell measures of the diagram. The evolution of the shape is realized by first\u0000updating the design variables according to these sensitivities and then\u0000reconstructing the diagram with efficient algorithms from computational\u0000geometry. Our shape optimization strategy is versatile: it can be applied to a\u0000wide gammut of physical situations. It is Lagrangian by essence, and it thereby\u0000benefits from all the assets of a consistently meshed representation of the\u0000shape. Yet, it naturally handles dramatic motions, including topological\u0000changes, in a very robust fashion. These features, among others, are\u0000illustrated by a series of 2d numerical examples.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On time-inconsistent extended mean-field control problems with common noise","authors":"Zongxia Liang, Xiang Yu, Keyu Zhang","doi":"arxiv-2409.07219","DOIUrl":"https://doi.org/arxiv-2409.07219","url":null,"abstract":"This paper addresses a class of time-inconsistent mean field control (MFC)\u0000problems in the presence of common noise under non-exponential discount, where\u0000the coefficients of the McKean-Vlasov dynamics depend on the conditional joint\u0000distribution of the state and control. We investigate the closed-loop\u0000time-consistent equilibrium strategies for these extended MFC problems and\u0000provide a sufficient and necessary condition for their characterization.\u0000Furthermore, we derive a master equation system that provides an equivalent\u0000characterization of our problem. We then apply these results to the\u0000time-inconsistent linear quadratic (LQ) MFC problems, characterizing the\u0000equilibrium strategies in terms of the solution to a non-local Riccati system.\u0000To illustrate these findings, two financial applications are presented.\u0000Finally, a non-LQ example is also discussed in which the closed-loop\u0000equilibrium strategy can be explicitly characterized and verified.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two Decentralized Conjugate Gradient Methods with Global Convergence","authors":"Liping Wang, Hao Wu, Hongchao Zhang","doi":"arxiv-2409.07122","DOIUrl":"https://doi.org/arxiv-2409.07122","url":null,"abstract":"This paper considers the decentralized optimization problem of minimizing a\u0000finite sum of continuously differentiable functions over a fixed-connected\u0000undirected network. Summarizing the lack of previously developed decentralized\u0000conjugate gradient methods, we propose two decentralized conjugate gradient\u0000method, called NDCG and DMBFGS respectively. Firstly, the best of our\u0000knowledge, NDCG is the first decentralized conjugate gradient method to be\u0000shown to have global convergence with constant stepsizes for general nonconvex\u0000optimization problems, which profits from our designed conjugate parameter and\u0000relies only on the same mild conditions as the centralized conjugate gradient\u0000method. Secondly, we apply the memoryless BFGS technique and develop the DMBFGS\u0000method. It requires only vector-vector products to capture the curvature\u0000information of Hessian matrices. Under proper choice of stepsizes, DMBFGS has\u0000global linear convergence for solving strongly convex decentralized\u0000optimization problems. Our numerical results show DMBFGS is very efficient\u0000compared with other state-of-the-art methods for solving decentralized\u0000optimization.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Mechanisms for Demand Response: An Indifference Set Approach","authors":"Mohammad Mehrabi, Omer Karaduman, Stefan Wager","doi":"arxiv-2409.07655","DOIUrl":"https://doi.org/arxiv-2409.07655","url":null,"abstract":"The time at which renewable (e.g., solar or wind) energy resources produce\u0000electricity cannot generally be controlled. In many settings, consumers have\u0000some flexibility in their energy consumption needs, and there is growing\u0000interest in demand-response programs that leverage this flexibility to shift\u0000energy consumption to better match renewable production -- thus enabling more\u0000efficient utilization of these resources. We study optimal demand response in a\u0000model where consumers operate home energy management systems (HEMS) that can\u0000compute the \"indifference set\" of energy-consumption profiles that meet\u0000pre-specified consumer objectives, receive demand-response signals from the\u0000grid, and control consumer devices within the indifference set. For example, if\u0000a consumer asks for the indoor temperature to remain between certain upper and\u0000lower bounds, a HEMS could time use of air conditioning or heating to align\u0000with high renewable production when possible. Here, we show that while\u0000price-based mechanisms do not in general achieve optimal demand response, i.e.,\u0000dynamic pricing cannot induce HEMS to choose optimal demand consumption\u0000profiles within the available indifference sets, pricing is asymptotically\u0000optimal in a mean-field limit with a growing number of consumers. Furthermore,\u0000we show that large-sample optimal dynamic prices can be efficiently derived via\u0000an algorithm that only requires querying HEMS about their planned consumption\u0000schedules given different prices. We demonstrate our approach in a grid\u0000simulation powered by OpenDSS, and show that it achieves meaningful demand\u0000response without creating grid instability.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-Phase Optimization for PINN Training","authors":"Dimary Moreno López","doi":"arxiv-2409.07296","DOIUrl":"https://doi.org/arxiv-2409.07296","url":null,"abstract":"This work presents an algorithm for training Neural Networks where the loss\u0000function can be decomposed into two non-negative terms to be minimized. The\u0000proposed method is an adaptation of Inexact Restoration algorithms,\u0000constituting a two-phase method that imposes descent conditions. Some\u0000performance tests are carried out in PINN training.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An observability estimate for the wave equation and applications to the Neumann boundary controllability for semi-linear wave equations","authors":"Sue Claret","doi":"arxiv-2409.07214","DOIUrl":"https://doi.org/arxiv-2409.07214","url":null,"abstract":"We give a boundary observability result for a $1$d wave equation with a\u0000potential. We then deduce with a Schauder fixed-point argument the existence of\u0000a Neumann boundary control for a semi-linear wave equation $partial_{tt}y -\u0000partial_{xx}y + f(y) = 0$ under an optimal growth assumption at infinity on\u0000$f$ of the type $sln^2s$. Moreover, assuming additional assumption on $f'$, we\u0000construct a minimizing sequence which converges to a control. Numerical\u0000experiments illustrate the results. This work extends to the Neumann boundary\u0000control case the work of Zuazua in $1993$ and the work of M\"unch and Tr'elat\u0000in $2022$.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible block-iterative analysis for the Frank-Wolfe algorithm","authors":"Gábor Braun, Sebastian Pokutta, Zev Woodstock","doi":"arxiv-2409.06931","DOIUrl":"https://doi.org/arxiv-2409.06931","url":null,"abstract":"We prove that the block-coordinate Frank-Wolfe (BCFW) algorithm converges\u0000with state-of-the-art rates in both convex and nonconvex settings under a very\u0000mild \"block-iterative\" assumption, newly allowing for (I) progress without\u0000activating the most-expensive linear minimization oracle(s), LMO(s), at every\u0000iteration, (II) parallelized updates that do not require all LMOs, and\u0000therefore (III) deterministic parallel update strategies that take into account\u0000the numerical cost of the problem's LMOs. Our results apply for short-step BCFW\u0000as well as an adaptive method for convex functions. New relationships between\u0000updated coordinates and primal progress are proven, and a favorable speedup is\u0000demonstrated using FrankWolfe.jl.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact SDP relaxations for a class of quadratic programs with finite and infinite quadratic constraints","authors":"Naohiko Arima, Sunyoung Kim, Masakazu Kojima","doi":"arxiv-2409.07213","DOIUrl":"https://doi.org/arxiv-2409.07213","url":null,"abstract":"We investigate exact semidefinite programming (SDP) relaxations for the\u0000problem of minimizing a nonconvex quadratic objective function over a feasible\u0000region defined by both finitely and infinitely many nonconvex quadratic\u0000inequality constraints (semi-infinite QCQPs). Specifically, we present two\u0000sufficient conditions on the feasible region under which the QCQP, with any\u0000quadratic objective function over the feasible region, is equivalent to its SDP\u0000relaxation. The first condition is an extension of a result recently proposed\u0000by the authors (arXiv:2308.05922, to appear in SIAM J. Optim.) from finitely\u0000constrained quadratic programs to semi-infinite QCQPs. The newly introduced\u0000second condition offers a clear geometric characterization of the feasible\u0000region for a broad class of QCQPs that are equivalent to their SDP relaxations.\u0000Several illustrative examples, including quadratic programs with ball-,\u0000parabola-, and hyperbola-based constraints, are also provided.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constraining Genetic Symbolic Regression via Semantic Backpropagation","authors":"Maximilian Reissmann, Yuan Fang, Andrew Ooi, Richard Sandberg","doi":"arxiv-2409.07369","DOIUrl":"https://doi.org/arxiv-2409.07369","url":null,"abstract":"Evolutionary symbolic regression approaches are powerful tools that can\u0000approximate an explicit mapping between input features and observation for\u0000various problems. However, ensuring that explored expressions maintain\u0000consistency with domain-specific constraints remains a crucial challenge. While\u0000neural networks are able to employ additional information like conservation\u0000laws to achieve more appropriate and robust approximations, the potential\u0000remains unrealized within genetic algorithms. This disparity is rooted in the\u0000inherent discrete randomness of recombining and mutating to generate new\u0000mapping expressions, making it challenging to maintain and preserve inferred\u0000constraints or restrictions in the course of the exploration. To address this\u0000limitation, we propose an approach centered on semantic backpropagation\u0000incorporated into the Gene Expression Programming (GEP), which integrates\u0000domain-specific properties in a vector representation as corrective feedback\u0000during the evolutionary process. By creating backward rules akin to algorithmic\u0000differentiation and leveraging pre-computed subsolutions, the mechanism allows\u0000the enforcement of any constraint within an expression tree by determining the\u0000misalignment and propagating desired changes back. To illustrate the\u0000effectiveness of constraining GEP through semantic backpropagation, we take the\u0000constraint of physical dimension as an example. This framework is applied to\u0000discovering physical equations from the Feynman lectures. Results have shown\u0000not only an increased likelihood of recovering the original equation but also\u0000notable robustness in the presence of noisy data.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Riemannian Federated Learning via Averaging Gradient Stream","authors":"Zhenwei Huang, Wen Huang, Pratik Jawanpuria, Bamdev Mishra","doi":"arxiv-2409.07223","DOIUrl":"https://doi.org/arxiv-2409.07223","url":null,"abstract":"In recent years, federated learning has garnered significant attention as an\u0000efficient and privacy-preserving distributed learning paradigm. In the\u0000Euclidean setting, Federated Averaging (FedAvg) and its variants are a class of\u0000efficient algorithms for expected (empirical) risk minimization. This paper\u0000develops and analyzes a Riemannian Federated Averaging Gradient Stream\u0000(RFedAGS) algorithm, which is a generalization of FedAvg, to problems defined\u0000on a Riemannian manifold. Under standard assumptions, the convergence rate of\u0000RFedAGS with fixed step sizes is proven to be sublinear for an approximate\u0000stationary solution. If decaying step sizes are used, the global convergence is\u0000established. Furthermore, assuming that the objective obeys the Riemannian\u0000Polyak-{L}ojasiewicz property, the optimal gaps generated by RFedAGS with\u0000fixed step size are linearly decreasing up to a tiny upper bound, meanwhile, if\u0000decaying step sizes are used, then the gaps sublinearly vanish. Numerical simulations conducted on synthetic and real-world data demonstrate\u0000the performance of the proposed RFedAGS.","PeriodicalId":501286,"journal":{"name":"arXiv - MATH - Optimization and Control","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142212397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}