{"title":"区间目标编程的新型行为惩罚函数与后优化分析","authors":"Mohamed Sadok Cherif","doi":"10.1016/j.dajour.2024.100511","DOIUrl":null,"url":null,"abstract":"<div><p>Goal programming (GP) is a multi-objective extension of linear programming. Interval GP (IGP) is one of the earliest methods to expand the range of preferred structures in GP. The decision maker’s (DM’s) utility or preference in IGP is investigated by incorporating a widening range of underlying utility functions, commonly known as penalty functions. The basic idea of these functions is that undesirable deviations from the target levels of the goals are penalized regarding a constant or variable penalty value. The main concern with introducing the penalty functions is providing a wide range of a priori preference structures. Yet, the evaluation of how undesirable deviations are penalized based on DM’s behavioral preferences is not sufficiently addressed in the penalty function types developed in the GP literature. In real-world scenarios involving risk, the achievement levels of decision-making attributes are typically associated with the behavior of the DM. In such scenarios, the DM’s unavoidable attitude toward risk should be integrated into the decision-making process. We introduce the concept of behavioral penalty functions into the IGP approach, incorporating a risk aversion parameter tailored to the nature of each attribute to address this gap. This concept offers an innovative framework for capturing the preferences of the DMs and their various attitudes toward risk within the IGP approach. In this paper, we first introduce the concept of behavioral penalty functions. Next, we develop a behavioral utility-based IGP model. Finally, we present a portfolio selection case study to demonstrate the applicability and efficacy of the proposed procedure, followed by a post-optimality analysis and comparisons with other GP approaches.</p></div>","PeriodicalId":100357,"journal":{"name":"Decision Analytics Journal","volume":"12 ","pages":"Article 100511"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772662224001152/pdfft?md5=54776e19883a23fdb187347a6c1d0b14&pid=1-s2.0-S2772662224001152-main.pdf","citationCount":"0","resultStr":"{\"title\":\"A novel behavioral penalty function for interval goal programming with post-optimality analysis\",\"authors\":\"Mohamed Sadok Cherif\",\"doi\":\"10.1016/j.dajour.2024.100511\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Goal programming (GP) is a multi-objective extension of linear programming. Interval GP (IGP) is one of the earliest methods to expand the range of preferred structures in GP. The decision maker’s (DM’s) utility or preference in IGP is investigated by incorporating a widening range of underlying utility functions, commonly known as penalty functions. The basic idea of these functions is that undesirable deviations from the target levels of the goals are penalized regarding a constant or variable penalty value. The main concern with introducing the penalty functions is providing a wide range of a priori preference structures. Yet, the evaluation of how undesirable deviations are penalized based on DM’s behavioral preferences is not sufficiently addressed in the penalty function types developed in the GP literature. 
In real-world scenarios involving risk, the achievement levels of decision-making attributes are typically associated with the behavior of the DM. In such scenarios, the DM’s unavoidable attitude toward risk should be integrated into the decision-making process. We introduce the concept of behavioral penalty functions into the IGP approach, incorporating a risk aversion parameter tailored to the nature of each attribute to address this gap. This concept offers an innovative framework for capturing the preferences of the DMs and their various attitudes toward risk within the IGP approach. In this paper, we first introduce the concept of behavioral penalty functions. Next, we develop a behavioral utility-based IGP model. Finally, we present a portfolio selection case study to demonstrate the applicability and efficacy of the proposed procedure, followed by a post-optimality analysis and comparisons with other GP approaches.</p></div>\",\"PeriodicalId\":100357,\"journal\":{\"name\":\"Decision Analytics Journal\",\"volume\":\"12 \",\"pages\":\"Article 100511\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2772662224001152/pdfft?md5=54776e19883a23fdb187347a6c1d0b14&pid=1-s2.0-S2772662224001152-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Decision Analytics Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772662224001152\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Analytics Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772662224001152","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Goal programming (GP) is a multi-objective extension of linear programming. Interval GP (IGP) is one of the earliest methods to expand the range of preference structures in GP. The decision maker’s (DM’s) utility or preference in IGP is modeled by incorporating a widening range of underlying utility functions, commonly known as penalty functions. The basic idea of these functions is that undesirable deviations from the target levels of the goals are penalized according to a constant or variable penalty value. The main purpose of introducing penalty functions is to provide a wide range of a priori preference structures. Yet, how undesirable deviations should be penalized in light of the DM’s behavioral preferences is not sufficiently addressed by the penalty function types developed in the GP literature. In real-world scenarios involving risk, the achievement levels of decision-making attributes are typically associated with the behavior of the DM. In such scenarios, the DM’s unavoidable attitude toward risk should be integrated into the decision-making process. To address this gap, we introduce the concept of behavioral penalty functions into the IGP approach, incorporating a risk aversion parameter tailored to the nature of each attribute. This concept offers an innovative framework for capturing the preferences of DMs and their various attitudes toward risk within the IGP approach. In this paper, we first introduce the concept of behavioral penalty functions. Next, we develop a behavioral utility-based IGP model. Finally, we present a portfolio selection case study to demonstrate the applicability and efficacy of the proposed procedure, followed by a post-optimality analysis and comparisons with other GP approaches.
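The abstract does not give the model's formulation, but a minimal sketch can illustrate the general idea it describes: in interval GP, deviations outside a goal's target interval are penalized, and a behavioral penalty makes that cost grow with the deviation at a rate governed by a per-goal risk-aversion parameter rather than a constant per-unit price. Everything below is an illustrative assumption for a toy portfolio problem, not the paper's actual behavioral penalty function or case study: the goal data, the exponential penalty form, and the alpha parameters are all hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's model): interval goal
# programming on a toy 3-asset portfolio. Deviations outside each goal's
# target interval are penalized by an exponential penalty whose steepness
# is set by a per-goal risk-aversion parameter alpha.
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-asset data.
ret = np.array([0.08, 0.12, 0.05])   # goal 1: portfolio return, target interval [0.07, 0.10]
risk = np.array([0.20, 0.35, 0.10])  # goal 2: portfolio risk,   target interval [0.15, 0.25]

goals = [
    # (coefficients, lower target, upper target, risk-aversion alpha)
    (ret,  0.07, 0.10, 4.0),   # fairly tolerant about return shortfalls
    (risk, 0.15, 0.25, 12.0),  # strongly averse to exceeding the risk band
]

def behavioral_penalty(d, alpha):
    """Convex penalty on a non-negative deviation d; tends to d as alpha -> 0."""
    return (np.exp(alpha * d) - 1.0) / alpha

def objective(x):
    total = 0.0
    for coeffs, lo, hi, alpha in goals:
        value = coeffs @ x
        below = max(0.0, lo - value)   # deviation under the target interval
        above = max(0.0, value - hi)   # deviation over the target interval
        total += behavioral_penalty(below, alpha) + behavioral_penalty(above, alpha)
    return total

# Portfolio weights: non-negative and summing to one.
cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)
bounds = [(0.0, 1.0)] * 3
res = minimize(objective, x0=np.full(3, 1.0 / 3.0), bounds=bounds, constraints=cons)
print("weights:", np.round(res.x, 3), "penalty:", round(res.fun, 6))
```

Under this assumed setup, a larger alpha on the risk goal makes even small excursions above the risk band costly relative to equally sized return shortfalls, which is the kind of attribute-specific attitude toward risk the behavioral penalty function is meant to capture; the paper's own formulation and post-optimality analysis should be consulted for the actual model.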