Adjustable robust optimization approach for SVM under uncertainty

Impact Factor 6.7 | CAS Tier 2 (Management) | JCR Q1 (Management)
F. Hooshmand, F. Seilsepour, S.A. MirHassani
{"title":"Adjustable robust optimization approach for SVM under uncertainty","authors":"F. Hooshmand,&nbsp;F. Seilsepour,&nbsp;S.A. MirHassani","doi":"10.1016/j.omega.2024.103206","DOIUrl":null,"url":null,"abstract":"<div><div>The support vector machine (SVM) is one of the successful approaches to the classification problem. Since the values of features are typically affected by uncertainty, it is important to incorporate uncertainty into the SVM formulation. This paper focuses on developing a robust optimization (RO) model for SVM. A key distinction from existing literature lies in the timing of optimizing decision variables. To the best of our knowledge, in all existing RO models developed for SVM, a common assumption is that all decision variables are decided before the uncertainty realization, which leads to an overly conservative decision boundary. However, this paper adopts a different strategy by determining the variables that assess the misclassification error of data points or their fall within the margin post-realization, resulting in a less conservative model. The RO models where decisions are made in two stages (some before and the rest after the uncertainty resolution), are called adjustable RO models. This adjustment results in a three-level optimization model for which two decomposition-based algorithms are proposed. In these algorithms, after providing a bi-level reformulation, the model is divided into a master-problem (MP) and a sub-problem the interaction of which yields the optimal solution. Acceleration of algorithms via incorporating valid inequalities into MP is another novelty of this paper. Computational results over simulated and real-world datasets confirm the efficiency of the proposed model and algorithms.</div></div>","PeriodicalId":19529,"journal":{"name":"Omega-international Journal of Management Science","volume":"131 ","pages":"Article 103206"},"PeriodicalIF":6.7000,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Omega-international Journal of Management Science","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0305048324001701","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0

Abstract

The support vector machine (SVM) is one of the successful approaches to the classification problem. Since feature values are typically affected by uncertainty, it is important to incorporate uncertainty into the SVM formulation. This paper focuses on developing a robust optimization (RO) model for SVM. A key distinction from the existing literature lies in the timing of optimizing the decision variables. To the best of our knowledge, all existing RO models developed for SVM assume that every decision variable is fixed before the uncertainty is realized, which leads to an overly conservative decision boundary. This paper adopts a different strategy: the variables that measure the misclassification error of data points, or their falling within the margin, are determined after the uncertainty is realized, resulting in a less conservative model. RO models in which decisions are made in two stages (some before and the rest after the uncertainty is resolved) are called adjustable RO models. This adjustment yields a three-level optimization model, for which two decomposition-based algorithms are proposed. In these algorithms, after a bi-level reformulation, the model is split into a master problem (MP) and a sub-problem whose interaction yields the optimal solution. Accelerating the algorithms by incorporating valid inequalities into the MP is another novelty of this paper. Computational results on simulated and real-world datasets confirm the efficiency of the proposed model and algorithms.
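For background, the nominal soft-margin SVM on which robust SVM models (including the adjustable one summarized above) are typically built can be written as the quadratic program below. This is standard textbook material, not the paper's own robust formulation; w, b, and xi_i denote the weight vector, intercept, and slack variables, and C > 0 is the regularization parameter.

\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\quad & y_i\bigl(w^{\top}x_i + b\bigr) \ge 1 - \xi_i, \qquad i=1,\dots,n, \\
& \xi_i \ge 0, \qquad i=1,\dots,n.
\end{aligned}

In the adjustable RO reading sketched in the abstract, (w, b) are here-and-now decisions fixed before the feature uncertainty is realized, while the slacks xi_i, which measure misclassification or falling inside the margin, become wait-and-see variables chosen after the realization; letting the xi_i adjust is what makes the model less conservative than a fully static robust SVM.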
Source Journal

Omega - International Journal of Management Science (Operations Research & Management Science)
CiteScore: 13.80
Self-citation rate: 11.60%
Articles published: 130
Review time: 56 days
Journal description: Omega reports on developments in management, including the latest research results and applications. Original contributions and review articles describe the state of the art in specific fields or functions of management, while there are shorter critical assessments of particular management techniques. Other features of the journal are the "Memoranda" section for short communications and "Feedback", a correspondence column. Omega is both stimulating reading and an important source for practising managers, specialists in management services, operational research workers and management scientists, management consultants, academics, students and research personnel throughout the world. The material published is of high quality and relevance, written in a manner which makes it accessible to all of this wide-ranging readership. Preference will be given to papers with implications for the practice of management. Submissions of purely theoretical papers are discouraged. The review of material for publication in the journal reflects this aim.