Assessing interpretability of data-driven fuzzy models: Application in industrial regression problems

IF 3.0 · CAS Zone 4 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Expert Systems · Pub Date: 2024-08-27 · DOI: 10.1111/exsy.13710
Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida
{"title":"评估数据驱动模糊模型的可解释性:在工业回归问题中的应用","authors":"Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida","doi":"10.1111/exsy.13710","DOIUrl":null,"url":null,"abstract":"Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, being utilized in a wide range of advanced fields due to its ability and efficiency to process large amounts of data and to make predictions or decisions with a high degree of accuracy. However, with the increase in the complexity of the models, ML's methods have presented complex structures that are not always transparent to the users. In this sense, it is important to study how to counteract this trend and explore ways to increase the interpretability of these models, precisely where decision‐making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy‐based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Various metrics have been studied to address this topic, such as the Co‐firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy‐based models: (i) a model designed with Fuzzy c‐Means and Least Squares Method, (ii) Adaptive‐Network‐based Fuzzy Inference System (ANFIS), and (iii) Generalized Additive Model Zero‐Order Takagi‐Sugeno (GAM‐ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy‐based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using 4 datasets varying the model parameters in order to find a compromise between interpretability and accuracy.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing interpretability of data‐driven fuzzy models: Application in industrial regression problems\",\"authors\":\"Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida\",\"doi\":\"10.1111/exsy.13710\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, being utilized in a wide range of advanced fields due to its ability and efficiency to process large amounts of data and to make predictions or decisions with a high degree of accuracy. However, with the increase in the complexity of the models, ML's methods have presented complex structures that are not always transparent to the users. In this sense, it is important to study how to counteract this trend and explore ways to increase the interpretability of these models, precisely where decision‐making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy‐based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. 
Various metrics have been studied to address this topic, such as the Co‐firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy‐based models: (i) a model designed with Fuzzy c‐Means and Least Squares Method, (ii) Adaptive‐Network‐based Fuzzy Inference System (ANFIS), and (iii) Generalized Additive Model Zero‐Order Takagi‐Sugeno (GAM‐ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy‐based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using 4 datasets varying the model parameters in order to find a compromise between interpretability and accuracy.\",\"PeriodicalId\":51053,\"journal\":{\"name\":\"Expert Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-08-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1111/exsy.13710\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1111/exsy.13710","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Machine Learning (ML) has attracted great interest for the modeling of systems using computational learning methods, and is used in a wide range of advanced fields thanks to its ability to process large amounts of data efficiently and to make predictions or decisions with a high degree of accuracy. However, as models have grown in complexity, ML methods have developed structures that are not always transparent to users. It is therefore important to study how to counteract this trend and to explore ways of increasing the interpretability of these models, precisely where decision-making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy-based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Various metrics have been studied to address this topic, such as the Co-firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy-based models: (i) a model designed with Fuzzy c-Means and the Least Squares Method, (ii) the Adaptive-Network-based Fuzzy Inference System (ANFIS), and (iii) the Generalized Additive Model Zero-Order Takagi-Sugeno (GAM-ZOTS). The study culminates in a new comprehensive interpretability metric that covers the different domains associated with interpretability in fuzzy-based models. One of the challenges when addressing interpretability lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using four datasets and varying the model parameters in order to find a compromise between interpretability and accuracy.
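To make the abstract's first model concrete, the snippet below is a minimal, self-contained sketch of the general recipe it names: rule antecedents obtained with Fuzzy c-Means (FCM) clustering and zero-order Takagi-Sugeno consequents fitted by least squares. It is not the authors' implementation; the fuzzifier value, the Gaussian antecedents derived from the FCM partition, the synthetic data, and the Jaccard-style overlap computed at the end (one common way to quantify membership-function redundancy, only loosely related to the paper's Similarity Index) are all assumptions made for illustration.

```python
# Hypothetical sketch: FCM antecedents + least-squares zero-order TS consequents.
import numpy as np


def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy c-Means: returns cluster centres V and partition matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                     # (C, D) centres
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12  # (N, C) distances
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return V, U


def firing_strengths(X, V, sigma):
    """Normalised Gaussian rule activations, shape (N, n_rules)."""
    z = (X[:, None, :] - V[None]) / sigma[None]
    w = np.exp(-0.5 * (z ** 2).sum(axis=2))
    return w / w.sum(axis=1, keepdims=True)


def fit_zero_order_ts(X, y, n_rules=3, m=2.0):
    """One rule per FCM cluster; constant consequents solved by least squares."""
    V, U = fuzzy_c_means(X, n_rules, m)
    Um = U ** m
    sq = (X[:, None, :] - V[None]) ** 2                               # (N, C, D)
    sigma = np.sqrt(np.einsum('nc,ncd->cd', Um, sq) / Um.sum(0)[:, None]) + 1e-6
    W = firing_strengths(X, V, sigma)                                 # (N, C)
    c, *_ = np.linalg.lstsq(W, y, rcond=None)                         # zero-order consequents
    return V, sigma, c


def predict(X, V, sigma, c):
    return firing_strengths(X, V, sigma) @ c


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(400)

    V, sigma, c = fit_zero_order_ts(X, y, n_rules=5)
    rmse = np.sqrt(np.mean((predict(X, V, sigma, c) - y) ** 2))
    print(f"training RMSE with 5 rules: {rmse:.3f}")

    # One common fuzzy-set overlap measure (Jaccard over a discretised domain)
    # between the first two rules' membership functions on input x1. High overlap
    # hints at a redundant, harder-to-read partition; the paper's Similarity Index
    # may be defined differently, this is only an illustration.
    grid = np.linspace(-3, 3, 501)
    mu = np.exp(-0.5 * ((grid[None, :] - V[:2, 0:1]) / sigma[:2, 0:1]) ** 2)  # (2, 501)
    similarity = np.minimum(mu[0], mu[1]).sum() / np.maximum(mu[0], mu[1]).sum()
    print(f"overlap between rules 1 and 2 on x1: {similarity:.2f}")
```

In this framing, the accuracy/interpretability compromise the abstract mentions shows up directly: fewer rules and less overlapping membership functions are easier to read, but typically increase the RMSE.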
Source journal
Expert Systems (Engineering & Technology: Computer Science, Theory & Methods)
CiteScore: 7.40
Self-citation rate: 6.10%
Articles published: 266
Review time: 24 months
Journal description: Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper. As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.