{"title":"Leveraging subclass learning for improving uncertainty estimation in deep Learning","authors":"Dimitrios Spanos, Nikolaos Passalis, Anastasios Tefas","doi":"10.1016/j.neucom.2025.130954","DOIUrl":null,"url":null,"abstract":"<div><div>Machine learning is becoming increasingly popular across various applications and has led to state-of-the-art results, but it faces challenges related to its trustworthiness. One aspect of making deep learning models more trustworthy is improving their ability to estimate the uncertainty of whether a sample is from the in-domain (ID) data distribution or not. Especially, neural networks have a tendency to make overly confident extrapolations and struggle to convey their uncertainty, which can limit their trustworthiness. Recent approaches have employed Radial Basis Function (RBF)-based models with great success in improving uncertainty estimation in Deep Learning. However, such models assume a unimodal distribution of the data for each class, which we show is critical for out-of-distribution sample detection, but can be limiting in many real world cases. To overcome these limitations, in this paper, we propose a method for training a deep model utilizing the inherent different modalities that naturally arise in a class in real data, which we call <em>subclasses</em>, leading to improved uncertainty quantification. The proposed method leverages a variance-preserving reconstruction-based representation learning approach that prevents feature collapse and enables robust discovery of subclasses, further improving the effectiveness of the proposed approach. The improvement of the approach is demonstrated using extensive experiments on several datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130954"},"PeriodicalIF":5.5000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225016261","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Machine learning is becoming increasingly popular across various applications and has led to state-of-the-art results, but it faces challenges related to its trustworthiness. One aspect of making deep learning models more trustworthy is improving their ability to estimate the uncertainty of whether a sample comes from the in-domain (ID) data distribution or not. In particular, neural networks tend to make overly confident extrapolations and struggle to convey their uncertainty, which can limit their trustworthiness. Recent approaches have employed Radial Basis Function (RBF)-based models with great success in improving uncertainty estimation in deep learning. However, such models assume a unimodal distribution of the data for each class, which we show is critical for out-of-distribution sample detection but can be limiting in many real-world cases. To overcome these limitations, in this paper we propose a method for training a deep model that exploits the distinct modes that naturally arise within a class in real data, which we call subclasses, leading to improved uncertainty quantification. The proposed method leverages a variance-preserving, reconstruction-based representation learning approach that prevents feature collapse and enables robust discovery of subclasses, further improving the effectiveness of the proposed approach. The improvements offered by the approach are demonstrated through extensive experiments on several datasets.
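To make the idea concrete, below is a minimal, illustrative sketch (not the authors' actual implementation) of an RBF-style classification head that keeps several learnable centroids per class, standing in for the "subclasses" described in the abstract. A sample's confidence is taken as its maximum kernel similarity to any centroid, so inputs far from every centroid receive low confidence. All names, shapes, and hyperparameters here are assumptions made for the example.

```python
# Hypothetical sketch of a subclass-aware RBF head for uncertainty estimation.
# Each class owns several learnable centroids in feature space; the class score
# is the best kernel match among that class's centroids, and overall confidence
# is the maximum score across classes (low for out-of-distribution samples).
import torch
import torch.nn as nn


class SubclassRBFHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int,
                 subclasses_per_class: int = 3, length_scale: float = 0.5):
        super().__init__()
        # One centroid per (class, subclass) pair.
        self.centroids = nn.Parameter(
            torch.randn(num_classes, subclasses_per_class, feat_dim) * 0.05
        )
        self.length_scale = length_scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim)
        diff = features[:, None, None, :] - self.centroids[None, :, :, :]
        sq_dist = (diff ** 2).sum(dim=-1)              # (batch, classes, subclasses)
        kernel = torch.exp(-sq_dist / (2 * self.length_scale ** 2))
        # Class score: similarity to the best-matching subclass centroid.
        return kernel.max(dim=-1).values               # (batch, classes)


if __name__ == "__main__":
    head = SubclassRBFHead(feat_dim=64, num_classes=10)
    feats = torch.randn(8, 64)
    scores = head(feats)
    confidence = scores.max(dim=-1).values  # low when far from every centroid
    print(scores.shape, confidence.shape)
```

In this sketch, a standard single-centroid RBF head corresponds to `subclasses_per_class=1`; allowing several centroids per class is what lets the head model multimodal class distributions, which is the limitation the paper targets.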
Journal Introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.