Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida
{"title":"评估数据驱动模糊模型的可解释性:在工业回归问题中的应用","authors":"Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida","doi":"10.1111/exsy.13710","DOIUrl":null,"url":null,"abstract":"<p>Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, being utilized in a wide range of advanced fields due to its ability and efficiency to process large amounts of data and to make predictions or decisions with a high degree of accuracy. However, with the increase in the complexity of the models, ML's methods have presented complex structures that are not always transparent to the users. In this sense, it is important to study how to counteract this trend and explore ways to increase the interpretability of these models, precisely where decision-making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy-based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Various metrics have been studied to address this topic, such as the Co-firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy-based models: (i) a model designed with Fuzzy c-Means and Least Squares Method, (ii) Adaptive-Network-based Fuzzy Inference System (ANFIS), and (iii) Generalized Additive Model Zero-Order Takagi-Sugeno (GAM-ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy-based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using 4 datasets varying the model parameters in order to find a compromise between interpretability and accuracy.</p>","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":"41 12","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing interpretability of data-driven fuzzy models: Application in industrial regression problems\",\"authors\":\"Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida\",\"doi\":\"10.1111/exsy.13710\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, being utilized in a wide range of advanced fields due to its ability and efficiency to process large amounts of data and to make predictions or decisions with a high degree of accuracy. However, with the increase in the complexity of the models, ML's methods have presented complex structures that are not always transparent to the users. In this sense, it is important to study how to counteract this trend and explore ways to increase the interpretability of these models, precisely where decision-making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy-based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. 
Various metrics have been studied to address this topic, such as the Co-firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy-based models: (i) a model designed with Fuzzy c-Means and Least Squares Method, (ii) Adaptive-Network-based Fuzzy Inference System (ANFIS), and (iii) Generalized Additive Model Zero-Order Takagi-Sugeno (GAM-ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy-based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using 4 datasets varying the model parameters in order to find a compromise between interpretability and accuracy.</p>\",\"PeriodicalId\":51053,\"journal\":{\"name\":\"Expert Systems\",\"volume\":\"41 12\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/exsy.13710\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/exsy.13710","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Assessing interpretability of data-driven fuzzy models: Application in industrial regression problems
Machine Learning (ML) has attracted great interest for the modeling of systems using computational learning methods, and is used across a wide range of advanced fields due to its ability to process large amounts of data efficiently and to make predictions or decisions with a high degree of accuracy. However, as models grow in complexity, ML methods present structures that are not always transparent to users. It is therefore important to counteract this trend and to explore ways of increasing the interpretability of these models, particularly where decision-making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy-based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Several metrics have been studied to address this topic, such as the Co-firing Based Comprehensibility Index (COFCI), the Nauck Index, the Similarity Index, and the Membership Function Center Index. These metrics were assessed on different datasets using three fuzzy-based models: (i) a model designed with Fuzzy c-Means and the Least Squares Method, (ii) the Adaptive-Network-based Fuzzy Inference System (ANFIS), and (iii) the Generalized Additive Model Zero-Order Takagi-Sugeno (GAM-ZOTS). The study culminates in a new comprehensive interpretability metric that covers the different domains associated with interpretability in fuzzy-based models. A key challenge when addressing interpretability is balancing it against high accuracy, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using four datasets, varying the model parameters, in order to find a compromise between interpretability and accuracy.
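To make one of the named measures concrete, the sketch below illustrates a fuzzy-set similarity computation in the spirit of the "Similarity Index" mentioned in the abstract. It is a minimal, assumption-laden example, not the paper's exact implementation: the Gaussian membership functions, the Jaccard-style similarity (min for intersection, max for union on a discretised universe), and the averaging over all pairs in a variable's partition are illustrative choices. Highly overlapping, hard-to-distinguish fuzzy sets push the score towards 1, which is usually read as lower interpretability.

```python
# Hedged sketch of a fuzzy-set similarity measure over one variable's partition.
# All modeling choices here (Gaussian sets, Jaccard ratio, pairwise averaging)
# are illustrative assumptions, not the paper's definition of its Similarity Index.

import numpy as np


def gaussian_mf(x, center, sigma):
    """Gaussian membership function mu(x) on a sampled universe."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)


def jaccard_similarity(mu_a, mu_b):
    """Similarity of two fuzzy sets sampled on the same grid:
    intersection uses min, union uses max (standard fuzzy operators)."""
    intersection = np.minimum(mu_a, mu_b).sum()
    union = np.maximum(mu_a, mu_b).sum()
    return intersection / union if union > 0 else 0.0


def partition_similarity(centers, sigmas, universe):
    """Average pairwise similarity of the fuzzy sets partitioning one input.
    Values close to 1 indicate heavily overlapping (less distinguishable) sets."""
    mfs = [gaussian_mf(universe, c, s) for c, s in zip(centers, sigmas)]
    sims = [
        jaccard_similarity(mfs[i], mfs[j])
        for i in range(len(mfs))
        for j in range(i + 1, len(mfs))
    ]
    return float(np.mean(sims)) if sims else 0.0


if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 1001)
    # A well-separated partition vs. a heavily overlapping one.
    print(partition_similarity([2.0, 5.0, 8.0], [0.8, 0.8, 0.8], x))  # low overlap
    print(partition_similarity([4.0, 5.0, 6.0], [2.0, 2.0, 2.0], x))  # high overlap
```

Under these assumptions, the first partition yields a score near 0 and the second a score much closer to 1, matching the intuition that distinguishable membership functions support more interpretable rule bases.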
Journal Introduction:
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.