Explainable monotonic networks and constrained learning for interpretable classification and weakly supervised anomaly detection

Valentine Wargnier-Dauchelle, Thomas Grenier, Françoise Durand-Dubief, François Cotton, Michaël Sdika

Pattern Recognition, Volume 160, Article 111186
DOI: 10.1016/j.patcog.2024.111186
Published: 17 November 2024
URL: https://www.sciencedirect.com/science/article/pii/S0031320324009373
Abstract
Deep network interpretability is fundamental in critical domains such as medicine: easily explainable networks whose decisions rest on radiological signs rather than on spurious confounders would reassure clinicians. Such confidence is reinforced by the integration of intrinsic properties, and the characteristics of monotonic networks could be used to design intrinsically explainable networks. Because monotonic networks are considered too constrained and difficult to train, they are usually very shallow and rarely used for image applications. In this work, we propose a procedure to transform any architecture into a trainable monotonic network, identify the critical importance of weight initialization, and highlight the value of such networks for explainability and interpretability. By constraining the features and gradients of a healthy-versus-pathological image classifier, we show, using counterfactual examples, that the network's decision is based more on radiological signs of the pathology, and that it outperforms state-of-the-art weakly supervised anomaly detection methods.
About the Journal
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.