Title: A multi-objective optimization design to generate surrogate machine learning models in explainable artificial intelligence applications
Authors: Wellington Rodrigo Monteiro, Gilberto Reynoso-Meza
DOI: 10.1016/j.ejdp.2023.100040
Journal: EURO Journal on Decision Processes (JCR Q3, Management; impact factor 2.3)
Publication date: 2023-01-01 (Journal Article)
Article page: https://www.sciencedirect.com/science/article/pii/S2193943823000134
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2193943823000134/pdfft?md5=c0cfb4113c9d5700533e1ba3c3d4dfd1&pid=1-s2.0-S2193943823000134-main.pdf
Citations: 0
Abstract
Decision-making is crucial to the performance and well-being of any organization. While artificial intelligence algorithms are increasingly used in industry for decision-making purposes, the adoption of decision-making techniques to develop new artificial intelligence models does not follow the same trend. Complex artificial intelligence algorithm structures such as gradient boosting, ensembles, and neural networks offer higher accuracy at the expense of transparency. In organizations, however, managers and other stakeholders need to understand how an algorithm came to a given decision in order to properly criticize, learn from, audit, and improve said algorithms. Among the most recent techniques to address this, explainable artificial intelligence (XAI) algorithms offer an unprecedented level of interpretability, explainability, and informativeness to different human roles in industry. XAI algorithms seek to balance the trade-off between interpretability and accuracy by introducing techniques that, for instance, explain feature relevance in complex algorithms, generate counterfactual examples in "what-if?" analyses, and train surrogate models that are intrinsically explainable. However, while the trade-off between these two objectives is commonly referred to in the literature, few proposals use multi-objective optimization in XAI applications. Therefore, this paper proposes a new multi-objective optimization application to help decision-makers (for instance, data scientists) generate new surrogate machine learning models based on black-box models. These surrogates are generated by a multi-objective problem that simultaneously maximizes interpretability and accuracy. The proposed application also includes a multi-criteria decision-making step to rank the best surrogates with respect to these two objectives. Results from five classification and regression datasets tested on four black-box models show that the proposed method can create simple surrogates while maintaining high accuracy.
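To make the two-step idea in the abstract concrete, the following is a minimal, hypothetical sketch of the selection stage: candidate surrogates (here fictitious decision trees of different depths, scored by leaf count as a complexity proxy and fidelity to the black box) are first filtered to the Pareto-nondominated set on the two objectives, then ranked by an equal-weight normalized score standing in for whatever multi-criteria decision-making method the paper actually uses. All names and numbers are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical candidates: (name, complexity = leaf count, fidelity to black box).
# These values are invented for illustration only.
candidates = [
    ("tree_depth2", 4, 0.81),
    ("tree_depth4", 12, 0.90),
    ("tree_depth6", 30, 0.88),  # dominated by tree_depth4 (simpler AND more faithful)
    ("tree_depth8", 60, 0.93),
]

def pareto_front(cands):
    """Keep candidates not dominated on (complexity minimized, fidelity maximized)."""
    front = []
    for name, c, f in cands:
        dominated = any(
            c2 <= c and f2 >= f and (c2 < c or f2 > f)  # at least one strict improvement
            for _, c2, f2 in cands
        )
        if not dominated:
            front.append((name, c, f))
    return front

front = pareto_front(candidates)

# Stand-in MCDM step: min-max normalize both objectives on the front,
# then score with equal weights (0.5 interpretability, 0.5 accuracy).
cs = [c for _, c, _ in front]
fs = [f for _, _, f in front]

def score(c, f):
    interp = 1 - (c - min(cs)) / ((max(cs) - min(cs)) or 1)  # lower complexity is better
    acc = (f - min(fs)) / ((max(fs) - min(fs)) or 1)          # higher fidelity is better
    return 0.5 * interp + 0.5 * acc

best = max(front, key=lambda t: score(t[1], t[2]))
```

With these made-up numbers, the dominated depth-6 tree is discarded and the equal-weight ranking prefers the depth-4 surrogate, a mid-front compromise between the simplest and the most faithful candidates.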