Chimeric U-Net – Modifying the standard U-Net towards explainability

IF 5.1 · CAS Region 2 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Kenrick Schulze, Felix Peppert, Christof Schütte, Vikram Sunkara
Journal: Artificial Intelligence, Volume 338, Article 104240
DOI: 10.1016/j.artint.2024.104240
Published: 2024-10-30
URL: https://www.sciencedirect.com/science/article/pii/S0004370224001760
Citations: 0

Abstract

Healthcare guided by semantic segmentation has the potential to improve our quality of life through early and accurate disease detection. Convolutional Neural Networks, especially the U-Net-based architectures, are currently the state-of-the-art learning-based segmentation methods and have given unprecedented performances. However, their decision-making processes are still an active field of research. In order to reliably utilize such methods in healthcare, explainability of how the segmentation was performed is mandated. To date, explainability is studied and applied heavily in classification tasks. In this work, we propose the Chimeric U-Net, a U-Net architecture with an invertible decoder unit, that inherently brings explainability into semantic segmentation tasks. We find that having the restriction of an invertible decoder does not hinder the performance of the segmentation task. However, the invertible decoder helps to disentangle the class information in the latent space embedding and to construct meaningful saliency maps. Furthermore, we found that with a simple k-Nearest-Neighbours classifier, we could predict the Intersection over Union scores of unseen data, demonstrating that the latent space, constructed by the Chimeric U-Net, encodes an interpretable representation of the segmentation quality. Explainability is an emerging field, and in this work, we propose an alternative approach, that is, rather than building tools for explaining a generic architecture, we propose constraints on the architecture which induce explainability. With this approach, we could peer into the architecture to reveal its class correlations and local contextual dependencies, taking an insightful step towards trustworthy and reliable AI. Code to build and utilize the Chimeric U-Net is made available under:
https://github.com/kenrickschulze/Chimeric-UNet---Half-invertible-UNet-in-Pytorch
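The paper's central architectural constraint is an invertible decoder. The abstract does not describe its construction, but invertible networks are commonly built from affine coupling layers, which can be inverted in closed form. Below is a minimal NumPy sketch of that generic idea only, not the paper's actual decoder; `scale_net` and `shift_net` are hypothetical stand-ins for learned sub-networks.

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    # Split the feature vector in half; transform one half
    # conditioned on the other, which passes through unchanged.
    x1, x2 = np.split(x, 2)
    y1 = x1
    y2 = x2 * np.exp(scale_net(x1)) + shift_net(x1)
    return np.concatenate([y1, y2])

def coupling_inverse(y, scale_net, shift_net):
    # Exact inverse: undo the shift, then the scaling.
    y1, y2 = np.split(y, 2)
    x1 = y1
    x2 = (y2 - shift_net(y1)) * np.exp(-scale_net(y1))
    return np.concatenate([x1, x2])
```

Because one half conditions the other, the inverse needs no matrix inversion or iterative solve, which is what makes such decoders attractive for tracing predictions back through the network.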
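The abstract also reports that a simple k-Nearest-Neighbours classifier over the latent embedding can predict the IoU scores of unseen data. The exact setup is not given here; the following is one plausible minimal sketch, where the latent features, the distance metric, and the choice of k are all assumptions.

```python
import numpy as np

def knn_predict_iou(train_latents, train_ious, query_latent, k=5):
    """Predict the IoU of a query sample as the mean IoU of its
    k nearest training embeddings under Euclidean distance."""
    dists = np.linalg.norm(train_latents - query_latent, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(train_ious[nearest]))
```

The point of such a probe is diagnostic: if nearest neighbours in the latent space share similar segmentation quality, the embedding carries an interpretable signal about how well the model will segment a given input.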
Source Journal

Artificial Intelligence (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 11.20
Self-citation rate: 1.40%
Articles per year: 118
Review time: 8 months
Journal description: The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.