{"title":"On Non-Linearity and Convergence in Non-Linear Least Squares","authors":"O. Kurt","doi":"10.5772/INTECHOPEN.76313","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.76313","url":null,"abstract":"To interpret and explain the mechanism of an engineering problem, the redundant observations are carried out by scientists and engineers. The functional relationships between the observations and parameters defining the model are generally nonlinear. Those relationships are constituted by a nonlinear equation system. The equations of the system are not solved without using linearization of them on the computer. If the linearized equations are consistent, the solution of the system is ensured for a probably global minimum quickly by any approximated values of the parameters in the least squares (LS). Otherwise, namely an inconsistent case, the convergence of the solution needs to be well-determined approximate values for the global minimum solution even if in LS. A numerical example for 3D space fixes coordinates of an artificial global navigation satellite system (GNSS) satellite modeled by a simple combination of firstdegree polynomial and first-order trigonometric functions will be given. It will be shown by the real example that the convergence of the solution depends on the approximated values of the model parameters.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122230962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributionally Robust Optimization","authors":"Jian Gao, Yida Xu, J. Barreiro‐Gomez, Massa Ndong, Michail Smyrnakis, Hamidou TembineJian Gao, M. Smyrnakis, H. Tembine","doi":"10.5772/INTECHOPEN.76686","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.76686","url":null,"abstract":"This chapter presents a class of distributionally robust optimization problems in which a decision-maker has to choose an action in an uncertain environment. The decision-maker has a continuous action space and aims to learn her optimal strategy. The true distribution of the uncertainty is unknown to the decision-maker. This chapter provides alternative ways to select a distribution based on empirical observations of the decision-maker. This leads to a distributionally robust optimization problem. Simple algorithms, whose dynamics are inspired from the gradient flows, are proposed to find local optima. The method is extended to a class of optimization problems with orthogonal constraints and coupled constraints over the simplex set and polytopes. The designed dynamics do not use the projection operator and are able to satisfy both upper- and lower-bound constraints. The convergence rate of the algorithm to generalized evolutionarily stable strategy is derived using a mean regret estimate. Illustrative examples are provided.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128223108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polyhedral Complementarity Approach to Equilibrium Problem in Linear Exchange Models","authors":"V. Shmyrev","doi":"10.5772/INTECHOPEN.77206","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.77206","url":null,"abstract":"New development of original approach to the equilibrium problem in a linear exchange model and its variations is presented. The conceptual base of this approach is the scheme of polyhedral complementarity. The idea is fundamentally different from the well-known reduction to a linear complementarity problem. It may be treated as a realization of the main idea of the linear and quadratic programming methods. In this way, the finite algorithms for finding the equilibrium prices are obtained. The whole process is a successive consideration of different structures of possible solution. They are analogous to basic sets in the simplex method. The approach reveals a decreasing property of the associated mapping whose fixed point yields the equilibrium of the model. The basic methods were generalized for some variations of the linear exchange model.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134645505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multicriteria Support for Group Decision Making","authors":"Andrzej Łodziński","doi":"10.5772/INTECHOPEN.79935","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.79935","url":null,"abstract":"This chapter presents the support method for group decision making. A group decision is when a group of people has to make one joint decision. Each member of the group has his own assessment of a joint decision. The decision making of a group decision is modeled as a multicriteria optimization problem where the respective evaluation functions are the assessment of a joint decision by each member. The interactive analysis that is based on the reference point method applied to the multicriteria problems allows to find effective solutions matching the group ’ s preferences. Each member of the group is able to verify results of every decision. The chapter presents an example of an application of the support method in the selection of the group decision.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122223610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Piecewise Parallel Optimal Algorithm","authors":"Z. Zhu, Gefei Shi","doi":"10.5772/INTECHOPEN.76625","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.76625","url":null,"abstract":"This chapter studies a new optimal algorithm that can be implemented in a piecewise parallel manner onboard spacecraft, where the capacity of onboard computers is limited. The proposed algorithm contains two phases. The predicting phase deals with the openloop state trajectory optimization with simplified system model and evenly discretized time interval of the state trajectory. The tracking phase concerns the closed-loop optimal tracking control for the optimal reference trajectory with full system model subject to real space perturbations. The finite receding horizon control method is used in the tracking program. The optimal control problems in both programs are solved by a direct collocation method based on the discretized Hermite–Simpson method with coincident nodes. By considering the convergence of system error, the current closed-loop control tracking interval and next open-loop control predicting interval are processed simultaneously. Two cases are simulated with the proposed algorithm to validate the effectiveness of proposed algorithm. The numerical results show that the proposed parallel optimal algorithm is very effective in dealing with the optimal control problems for complex nonlinear dynamic systems in aerospace engineering area.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121033822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Gradient Multiobjective Particle Swarm Optimization","authors":"Hong-gui Han, Lu Zhang, J. Qiao","doi":"10.5772/INTECHOPEN.76306","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.76306","url":null,"abstract":"An adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm, based on a multiobjective gradient (MOG) method, is developed to improve the computation performance. In this AGMOPSO algorithm, the MOG method is devised to update the archive to improve the convergence speed and the local exploitation in the evolutionary process. Attributed to the MOGmethod, this AGMOPSO algorithm not only has faster convergence speed and higher accuracy but also its solutions have better diversity. Additionally, the convergence is discussed to confirm the prerequisite of any successful application of AGMOPSO. Finally, with regard to the computation performance, the proposed AGMOPSO algorithm is compared with some other multiobjective particle swarm optimization (MOPSO) algorithms and two state-of-the-art multiobjective algorithms. The results demonstrate that the proposed AGMOPSO algorithm can find better spread of solutions and have faster convergence to the true Pareto-optimal front.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116249111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bilevel Disjunctive Optimization on Affine Manifolds","authors":"C. Udrişte, H. Bonnel, I. Ţevy, Ali SapeehRasheed","doi":"10.5772/INTECHOPEN.75643","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.75643","url":null,"abstract":"Bilevel optimization is a special kind of optimization where one problem is embedded within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lowerlevel optimization task. These problems involve two kinds of variables: upper-level variables and lower-level variables. Bilevel optimization was first realized in the field of game theory by a German economist von Stackelberg who published a book (1934) that described this hierarchical problem. Now the bilevel optimization problems are commonly found in a number of real-world problems: transportation, economics, decision science, business, engineering, and so on. In this chapter, we provide a general formulation for bilevel disjunctive optimization problem on affine manifolds. These problems contain two levels of optimization tasks where one optimization task is nested within the other. The outer optimization problem is commonly referred to as the leaders (upper level) optimization problem and the inner optimization problem is known as the followers (or lower level) optimization problem. The two levels have their own objectives and constraints. Topics affine convex functions, optimizations with auto-parallel restrictions, affine convexity of posynomial functions, bilevel disjunctive problem and algorithm, models of bilevel disjunctive programming problems, and properties of minimum functions.","PeriodicalId":337657,"journal":{"name":"Optimization Algorithms - Examples","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115408583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}