Leveraging subclass learning for improving uncertainty estimation in deep learning

IF 5.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Dimitrios Spanos, Nikolaos Passalis, Anastasios Tefas
{"title":"利用子类学习改进深度学习中的不确定性估计","authors":"Dimitrios Spanos,&nbsp;Nikolaos Passalis,&nbsp;Anastasios Tefas","doi":"10.1016/j.neucom.2025.130954","DOIUrl":null,"url":null,"abstract":"<div><div>Machine learning is becoming increasingly popular across various applications and has led to state-of-the-art results, but it faces challenges related to its trustworthiness. One aspect of making deep learning models more trustworthy is improving their ability to estimate the uncertainty of whether a sample is from the in-domain (ID) data distribution or not. Especially, neural networks have a tendency to make overly confident extrapolations and struggle to convey their uncertainty, which can limit their trustworthiness. Recent approaches have employed Radial Basis Function (RBF)-based models with great success in improving uncertainty estimation in Deep Learning. However, such models assume a unimodal distribution of the data for each class, which we show is critical for out-of-distribution sample detection, but can be limiting in many real world cases. To overcome these limitations, in this paper, we propose a method for training a deep model utilizing the inherent different modalities that naturally arise in a class in real data, which we call <em>subclasses</em>, leading to improved uncertainty quantification. The proposed method leverages a variance-preserving reconstruction-based representation learning approach that prevents feature collapse and enables robust discovery of subclasses, further improving the effectiveness of the proposed approach. The improvement of the approach is demonstrated using extensive experiments on several datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130954"},"PeriodicalIF":5.5000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Leveraging subclass learning for improving uncertainty estimation in deep Learning\",\"authors\":\"Dimitrios Spanos,&nbsp;Nikolaos Passalis,&nbsp;Anastasios Tefas\",\"doi\":\"10.1016/j.neucom.2025.130954\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Machine learning is becoming increasingly popular across various applications and has led to state-of-the-art results, but it faces challenges related to its trustworthiness. One aspect of making deep learning models more trustworthy is improving their ability to estimate the uncertainty of whether a sample is from the in-domain (ID) data distribution or not. Especially, neural networks have a tendency to make overly confident extrapolations and struggle to convey their uncertainty, which can limit their trustworthiness. Recent approaches have employed Radial Basis Function (RBF)-based models with great success in improving uncertainty estimation in Deep Learning. However, such models assume a unimodal distribution of the data for each class, which we show is critical for out-of-distribution sample detection, but can be limiting in many real world cases. To overcome these limitations, in this paper, we propose a method for training a deep model utilizing the inherent different modalities that naturally arise in a class in real data, which we call <em>subclasses</em>, leading to improved uncertainty quantification. 
The proposed method leverages a variance-preserving reconstruction-based representation learning approach that prevents feature collapse and enables robust discovery of subclasses, further improving the effectiveness of the proposed approach. The improvement of the approach is demonstrated using extensive experiments on several datasets.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"651 \",\"pages\":\"Article 130954\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2025-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225016261\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225016261","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Machine learning is becoming increasingly popular across a wide range of applications and has led to state-of-the-art results, but it faces challenges related to its trustworthiness. One aspect of making deep learning models more trustworthy is improving their ability to estimate the uncertainty about whether a sample comes from the in-domain (ID) data distribution or not. In particular, neural networks tend to make overconfident extrapolations and struggle to convey their uncertainty, which can limit their trustworthiness. Recent approaches have employed Radial Basis Function (RBF)-based models with great success in improving uncertainty estimation in deep learning. However, such models assume a unimodal distribution of the data for each class, an assumption that we show is critical for out-of-distribution (OOD) sample detection but can be limiting in many real-world cases. To overcome these limitations, in this paper we propose a method for training a deep model that exploits the distinct modes that naturally arise within each class in real data, which we call subclasses, leading to improved uncertainty quantification. The proposed method leverages a variance-preserving, reconstruction-based representation learning approach that prevents feature collapse and enables robust discovery of subclasses, further strengthening the method. These improvements are demonstrated through extensive experiments on several datasets.
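To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of an RBF-based classification head extended with multiple centroids (subclasses) per class. This is not the authors' implementation; the class name SubclassRBFHead and the hyperparameters sigma and subclasses_per_class are illustrative assumptions. Setting subclasses_per_class = 1 recovers a single-centroid-per-class RBF head of the kind the abstract describes as assuming a unimodal class distribution.

```python
import torch
import torch.nn as nn

class SubclassRBFHead(nn.Module):
    """RBF head with multiple learnable centroids (subclasses) per class.

    Confidence for a sample is the maximum RBF kernel value over all
    subclass centroids; low confidence flags potential OOD inputs.
    All names and hyperparameters here are illustrative assumptions,
    not the paper's exact formulation.
    """

    def __init__(self, feat_dim, num_classes, subclasses_per_class=3, sigma=0.5):
        super().__init__()
        # One learnable centroid per (class, subclass) pair in feature space.
        self.centroids = nn.Parameter(
            torch.randn(num_classes, subclasses_per_class, feat_dim))
        self.sigma = sigma

    def forward(self, z):
        # z: (batch, feat_dim) embeddings from the backbone encoder.
        # Squared distances to every subclass centroid: (batch, C, S).
        d2 = ((z[:, None, None, :] - self.centroids[None]) ** 2).sum(-1)
        # RBF kernel value per centroid.
        k = torch.exp(-d2 / (2 * self.sigma ** 2))
        # Class score = best-matching subclass within each class.
        class_scores, _ = k.max(dim=2)           # (batch, num_classes)
        confidence, _ = class_scores.max(dim=1)  # (batch,)
        return class_scores, confidence
```

At inference time, 1 - confidence serves as an uncertainty score: OOD samples tend to fall far from all subclass centroids and therefore receive low kernel values for every class.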
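The abstract also mentions a variance-preserving, reconstruction-based representation learning objective that prevents feature collapse. The sketch below is one plausible reading, under the assumption that "variance-preserving" means penalizing the per-dimension variance of the embeddings when it falls below a target (in the spirit of VICReg-style variance terms), combined with a standard autoencoder reconstruction loss; the loss actually used in the paper may differ, and var_target, lam, and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def variance_preserving_recon_loss(x, x_hat, z, var_target=1.0, lam=1.0, eps=1e-4):
    """Reconstruction loss plus a hinge penalty that keeps the per-dimension
    standard deviation of the embeddings z above var_target, discouraging
    feature collapse. Weights and targets are illustrative assumptions.
    """
    recon = F.mse_loss(x_hat, x)                   # standard autoencoder term
    std = torch.sqrt(z.var(dim=0) + eps)           # per-dimension std over the batch
    var_penalty = F.relu(var_target - std).mean()  # only penalize low variance
    return recon + lam * var_penalty
```

The hinge form matters: features are pushed to keep spread (so distinct subclass modes stay separable for the RBF centroids) without being rewarded for unbounded variance.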
Source journal
Neurocomputing
Engineering & Technology – Computer Science: Artificial Intelligence
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published: 1382
Review time: 70 days
Journal introduction: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.