{"title":"用于决策的可解释gmdh型神经网络:以医学诊断为例","authors":"L. Jakaite, V. Schetinin","doi":"10.1016/j.asoc.2025.113607","DOIUrl":null,"url":null,"abstract":"<div><div>In medical diagnostics, the use of interpretable artificial neural networks (ANN) is crucial to enabling healthcare professionals to make informed decisions that consider risks, especially when faced with uncertainties in patient data and expert opinions. Despite advances, conventional ANNs often produce complex, not transparent models that limit interpretability, particularly in medical contexts where transparency is essential. Existing methods, such as decision trees and random forests, provide some interpretability but struggle with inconsistent medical data and fail to adequately quantify decision uncertainty. This paper introduces a novel Group Method of Data Handling (GMDH)-type neural network approach that addresses these gaps by generating concise, interpretable decision models based on the self-organizing concept. The proposed method builds multilayer networks using two-argument logical functions, ensuring explainability and minimizing the negative impact of human intervention. The method employs a selection criterion to incrementally grow networks, optimizing complexity while reducing validation errors. The algorithm’s convergence is proven through a bounded, monotonically decreasing error sequence, ensuring reliable solutions. Having been tested in complex diagnostic cases, including infectious endocarditis, systemic red lupus, and postoperative outcomes in acute appendicitis, the method achieved high expert agreement scores (Fleiss’s kappa of 0.98 (95% CI 0.97-0.99) and 0.86 (95% CI 0.83-0.89), respectively) compared to random forests (0.84 and 0.71). These results demonstrate statistically significant improvements (<span><math><mrow><mi>p</mi><mo><</mo><mn>0</mn><mo>.</mo><mn>05</mn></mrow></math></span>), highlighting the method’s ability to produce interpretable rules that reflect uncertainties and improve the reliability of decisions. Having demonstrated a transparent and robust framework for medical decision-making, the proposed approach bridges the gap between model accuracy and interpretability, providing practitioners with reliable insights and confidence estimates required for making risk-aware decisions.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"182 ","pages":"Article 113607"},"PeriodicalIF":7.2000,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable GMDH-type neural networks for decision making: Case of medical diagnostics\",\"authors\":\"L. Jakaite, V. Schetinin\",\"doi\":\"10.1016/j.asoc.2025.113607\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In medical diagnostics, the use of interpretable artificial neural networks (ANN) is crucial to enabling healthcare professionals to make informed decisions that consider risks, especially when faced with uncertainties in patient data and expert opinions. Despite advances, conventional ANNs often produce complex, not transparent models that limit interpretability, particularly in medical contexts where transparency is essential. Existing methods, such as decision trees and random forests, provide some interpretability but struggle with inconsistent medical data and fail to adequately quantify decision uncertainty. 
This paper introduces a novel Group Method of Data Handling (GMDH)-type neural network approach that addresses these gaps by generating concise, interpretable decision models based on the self-organizing concept. The proposed method builds multilayer networks using two-argument logical functions, ensuring explainability and minimizing the negative impact of human intervention. The method employs a selection criterion to incrementally grow networks, optimizing complexity while reducing validation errors. The algorithm’s convergence is proven through a bounded, monotonically decreasing error sequence, ensuring reliable solutions. Having been tested in complex diagnostic cases, including infectious endocarditis, systemic red lupus, and postoperative outcomes in acute appendicitis, the method achieved high expert agreement scores (Fleiss’s kappa of 0.98 (95% CI 0.97-0.99) and 0.86 (95% CI 0.83-0.89), respectively) compared to random forests (0.84 and 0.71). These results demonstrate statistically significant improvements (<span><math><mrow><mi>p</mi><mo><</mo><mn>0</mn><mo>.</mo><mn>05</mn></mrow></math></span>), highlighting the method’s ability to produce interpretable rules that reflect uncertainties and improve the reliability of decisions. Having demonstrated a transparent and robust framework for medical decision-making, the proposed approach bridges the gap between model accuracy and interpretability, providing practitioners with reliable insights and confidence estimates required for making risk-aware decisions.</div></div>\",\"PeriodicalId\":50737,\"journal\":{\"name\":\"Applied Soft Computing\",\"volume\":\"182 \",\"pages\":\"Article 113607\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Soft Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1568494625009184\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1568494625009184","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In medical diagnostics, the use of interpretable artificial neural networks (ANNs) is crucial to enabling healthcare professionals to make informed decisions that consider risks, especially when faced with uncertainties in patient data and expert opinions. Despite advances, conventional ANNs often produce complex, non-transparent models that limit interpretability, particularly in medical contexts where transparency is essential. Existing methods, such as decision trees and random forests, provide some interpretability but struggle with inconsistent medical data and fail to adequately quantify decision uncertainty. This paper introduces a novel Group Method of Data Handling (GMDH)-type neural network approach that addresses these gaps by generating concise, interpretable decision models based on the self-organizing concept. The proposed method builds multilayer networks using two-argument logical functions, ensuring explainability and minimizing the negative impact of human intervention. The method employs a selection criterion to incrementally grow networks, optimizing complexity while reducing validation errors. The algorithm's convergence is proven through a bounded, monotonically decreasing error sequence, ensuring reliable solutions. Having been tested in complex diagnostic cases, including infectious endocarditis, systemic lupus erythematosus, and postoperative outcomes in acute appendicitis, the method achieved high expert agreement scores (Fleiss's kappa of 0.98 (95% CI 0.97-0.99) and 0.86 (95% CI 0.83-0.89), respectively) compared to random forests (0.84 and 0.71). These results demonstrate statistically significant improvements (p < 0.05), highlighting the method's ability to produce interpretable rules that reflect uncertainties and improve the reliability of decisions. Having demonstrated a transparent and robust framework for medical decision-making, the proposed approach bridges the gap between model accuracy and interpretability, providing practitioners with reliable insights and confidence estimates required for making risk-aware decisions.
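To make the growth procedure described in the abstract more concrete, the following Python sketch illustrates the general GMDH idea: candidate neurons are formed from pairs of inputs via two-argument logical functions, the best candidates are selected by an external validation criterion, and layers are added only while the validation error keeps decreasing. This is a minimal sketch under assumed choices (the logical-function set, layer width, misclassification criterion, and synthetic data are illustrative assumptions), not the authors' published algorithm.

```python
# Illustrative GMDH-style self-organizing growth with two-argument logical neurons.
# NOT the paper's implementation; function set, selection criterion, and stopping
# rule are assumptions made for the sketch.
import itertools
import numpy as np

# Candidate two-argument logical functions applied to binary features.
LOGIC_FUNCS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def validation_error(pred, y):
    """Misclassification rate on the validation set (assumed external criterion)."""
    return np.mean(pred != y)

def grow_gmdh(X_train, y_train, X_val, y_val, width=5, max_layers=10):
    """Grow layers of two-argument neurons while the validation error decreases."""
    layer_train, layer_val = X_train, X_val
    best_err = np.inf
    model = []                                    # selected neurons per layer
    for _ in range(max_layers):
        candidates = []
        n = layer_train.shape[1]
        for i, j in itertools.combinations(range(n), 2):
            for name, f in LOGIC_FUNCS.items():
                out_tr = f(layer_train[:, i], layer_train[:, j])
                out_va = f(layer_val[:, i], layer_val[:, j])
                err = validation_error(out_va, y_val)
                candidates.append((err, i, j, name, out_tr, out_va))
        candidates.sort(key=lambda c: c[0])       # rank by the selection criterion
        top = candidates[:width]
        layer_err = top[0][0]
        if layer_err >= best_err:                 # error no longer decreases: stop
            break
        best_err = layer_err                      # bounded below by 0, non-increasing
        model.append([(i, j, name) for _, i, j, name, _, _ in top])
        layer_train = np.column_stack([c[4] for c in top])
        layer_val = np.column_stack([c[5] for c in top])
    return model, best_err

# Tiny synthetic example with binary symptom indicators (hypothetical data).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6))
y = (X[:, 0] & X[:, 1]) | X[:, 4]                 # hidden diagnostic rule
model, err = grow_gmdh(X[:100], y[:100], X[100:], y[100:])
print(f"layers grown: {len(model)}, validation error: {err:.3f}")
```

Because a layer is accepted only if it lowers the validation error, the resulting error sequence is non-increasing and bounded below by zero, which mirrors the convergence argument stated in the abstract.

The reported expert-agreement scores use Fleiss's kappa. The short sketch below shows how that statistic is computed from a rating matrix, using the standard definition; the three-expert rating table is hypothetical and unrelated to the paper's data.

```python
# Fleiss's kappa from a (subjects x categories) count matrix; standard formula,
# hypothetical ratings for illustration only.
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of raters who assigned subject i to category j."""
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                       # assumed equal per subject
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)     # category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Three hypothetical experts rating 5 cases into two categories.
ratings = np.array([[3, 0], [2, 1], [3, 0], [0, 3], [3, 0]])
print(f"Fleiss's kappa: {fleiss_kappa(ratings):.2f}")
```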
Journal Introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest-quality research in the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are kept short.