Alvaro Sierra-Altamiranda, Hadi Charkhgard, Iman Dayarian, Ali Eshragh, Sorna Javadi
{"title":"Learning to project in a criterion space search algorithm: an application to multi-objective binary linear programming","authors":"Alvaro Sierra-Altamiranda, Hadi Charkhgard, Iman Dayarian, Ali Eshragh, Sorna Javadi","doi":"10.1007/s11590-024-02100-5","DOIUrl":null,"url":null,"abstract":"<p>In this paper, we investigate the possibility of improving the performance of multi-objective optimization solution approaches using machine learning techniques. Specifically, we focus on multi-objective binary linear programs and employ one of the most effective and recently developed criterion space search algorithms, the so-called KSA, during our study. This algorithm computes all nondominated points of a problem with <i>p</i> objectives by searching on a projected criterion space, i.e., a <span>\\((p-1)\\)</span>-dimensional criterion apace. We present an effective and fast learning approach to identify on which projected space the KSA should work. We also present several generic features/variables that can be used in machine learning techniques for identifying the best projected space. Finally, we present an effective bi-objective optimization-based heuristic for selecting the subset of the features to overcome the issue of overfitting in learning. Through an extensive computational study over 2000 instances of tri-objective knapsack and assignment problems, we demonstrate that an improvement of up to 18% in time can be achieved by the proposed learning method compared to a random selection of the projected space. To show that the performance of our algorithm is not limited to instances of knapsack and assignment problems with three objective functions, we also report similar performance results when the proposed learning approach is used for solving random binary integer program instances with four objective functions.</p>","PeriodicalId":49720,"journal":{"name":"Optimization Letters","volume":"16 1","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optimization Letters","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s11590-024-02100-5","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we investigate the possibility of improving the performance of multi-objective optimization solution approaches using machine learning techniques. Specifically, we focus on multi-objective binary linear programs and employ one of the most effective recently developed criterion space search algorithms, the so-called KSA, in our study. This algorithm computes all nondominated points of a problem with p objectives by searching a projected criterion space, i.e., a (p-1)-dimensional criterion space. We present an effective and fast learning approach to identify which projected space the KSA should work on. We also present several generic features/variables that can be used in machine learning techniques to identify the best projected space. Finally, we present an effective bi-objective optimization-based heuristic for selecting a subset of the features to overcome the issue of overfitting in learning. Through an extensive computational study over 2000 instances of tri-objective knapsack and assignment problems, we demonstrate that the proposed learning method reduces solution time by up to 18% compared to a random selection of the projected space. To show that the performance of our algorithm is not limited to knapsack and assignment instances with three objective functions, we also report similar performance results when the proposed learning approach is used to solve random binary integer program instances with four objective functions.
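To make the "learning to project" idea concrete, the sketch below illustrates one way such a selector could be built: compute simple per-objective features of an instance and train a classifier that predicts which objective to drop when forming the (p-1)-dimensional projected criterion space. This is a minimal sketch under assumed choices; the features, the placeholder labels (which in practice would come from offline KSA timing experiments), and the random-forest classifier are illustrative assumptions, not the features or learning method used in the paper.

```python
# Hypothetical sketch: learn which objective to project out before running a
# criterion space search such as the KSA. All modeling choices here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def objective_features(C):
    """Simple per-objective features from a cost matrix C of shape (p, n):
    coefficient range, mean, standard deviation, and mean absolute correlation
    with the other objectives (hypothetical stand-ins for the paper's features)."""
    p = C.shape[0]
    corr = np.corrcoef(C)
    rows = []
    for i in range(p):
        cross = np.abs(np.delete(corr[i], i)).mean()
        rows.append([np.ptp(C[i]), C[i].mean(), C[i].std(), cross])
    return np.array(rows)

# Placeholder training data: random tri-objective cost matrices with random labels.
# In practice, label = 1 would mean "dropping this objective gave the fastest KSA run",
# obtained from offline experiments; here labels are random purely to keep the sketch runnable.
instances = [rng.integers(1, 100, size=(3, 50)) for _ in range(200)]
X_train = np.vstack([objective_features(C) for C in instances])
y_train = rng.integers(0, 2, size=X_train.shape[0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def choose_projection(C):
    """Return the index of the objective to project out for a new instance."""
    scores = clf.predict_proba(objective_features(C))[:, 1]
    return int(np.argmax(scores))

print(choose_projection(rng.integers(1, 100, size=(3, 50))))
```

The classifier's role is only to pick the projection; the projected (p-1)-dimensional problem would then be handed to the criterion space search algorithm itself.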
Journal introduction
Optimization Letters is an international journal covering all aspects of optimization, including theory, algorithms, computational studies, and applications, and providing an outlet for rapid publication of short communications in the field. Originality, significance, quality and clarity are the essential criteria for choosing the material to be published.
The field of optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing interdisciplinary nature of the field.
Optimization Letters aims to communicate all recent developments in optimization in a timely fashion through concise articles (limited to a total of ten journal pages). Such concise articles are easily accessible to readers working in any area of optimization who wish to be informed of recent developments.