Classification Confidence in Exploratory Learning: A User's Guide

Impact Factor: 4.0 · JCR Q2 · Computer Science, Artificial Intelligence
P. Salamon, David Salamon, V. A. Cantu, Michelle An, Tyler Perry, Robert A. Edwards, A. Segall
{"title":"Classification Confidence in Exploratory Learning: A User's Guide","authors":"P. Salamon, David Salamon, V. A. Cantu, Michelle An, Tyler Perry, Robert A. Edwards, A. Segall","doi":"10.3390/make5030043","DOIUrl":null,"url":null,"abstract":"This paper investigates the post-hoc calibration of confidence for “exploratory” machine learning classification problems. The difficulty in these problems stems from the continuing desire to push the boundaries of which categories have enough examples to generalize from when curating datasets, and confusion regarding the validity of those categories. We argue that for such problems the “one-versus-all” approach (top-label calibration) must be used rather than the “calibrate-the-full-response-matrix” approach advocated elsewhere in the literature. We introduce and test four new algorithms designed to handle the idiosyncrasies of category-specific confidence estimation using only the test set and the final model. Chief among these methods is the use of kernel density ratios for confidence calibration including a novel algorithm for choosing the bandwidth. We test our claims and explore the limits of calibration on a bioinformatics application (PhANNs) as well as the classic MNIST benchmark. Finally, our analysis argues that post-hoc calibration should always be performed, may be performed using only the test dataset, and should be sanity-checked visually.","PeriodicalId":93033,"journal":{"name":"Machine learning and knowledge extraction","volume":"33 1","pages":"803-829"},"PeriodicalIF":4.0000,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning and knowledge extraction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/make5030043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper investigates post-hoc calibration of confidence for “exploratory” machine learning classification problems. The difficulty in these problems stems from the continuing desire, when curating datasets, to push the boundaries of which categories have enough examples to generalize from, and from confusion regarding the validity of those categories. We argue that for such problems the “one-versus-all” approach (top-label calibration) must be used rather than the “calibrate-the-full-response-matrix” approach advocated elsewhere in the literature. We introduce and test four new algorithms designed to handle the idiosyncrasies of category-specific confidence estimation using only the test set and the final model. Chief among these methods is the use of kernel density ratios for confidence calibration, including a novel algorithm for choosing the bandwidth. We test our claims and explore the limits of calibration on a bioinformatics application (PhANNs) as well as the classic MNIST benchmark. Finally, our analysis argues that post-hoc calibration should always be performed, may be performed using only the test dataset, and should be sanity-checked visually.
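
The kernel-density-ratio idea behind the calibration can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it treats the top-label scores of correctly and incorrectly classified test examples as two one-dimensional samples, fits a Gaussian KDE to each using scipy's default (Scott's rule) bandwidth rather than the paper's bandwidth-selection algorithm, and converts a raw top-label score into a calibrated confidence via Bayes' rule applied to the density ratio. The function name and arguments are hypothetical.

# Minimal sketch of top-label (one-versus-all) confidence calibration via
# kernel density ratios. Illustrative only: scipy's default bandwidth
# stands in for the paper's bandwidth-selection algorithm.
import numpy as np
from scipy.stats import gaussian_kde

def calibrate_top_label(test_scores, test_correct, new_scores):
    """Map raw top-label scores to calibrated confidences.

    test_scores  : 1-D array of top-label (max softmax) scores on the test set
    test_correct : boolean array, True where the top label was correct
    new_scores   : top-label scores for which calibrated confidences are wanted
    """
    test_scores = np.asarray(test_scores, dtype=float)
    test_correct = np.asarray(test_correct, dtype=bool)

    # Split the test-set scores by whether the predicted top label was correct.
    pos = test_scores[test_correct]      # scores of correct predictions
    neg = test_scores[~test_correct]     # scores of incorrect predictions

    # Fit a one-dimensional kernel density estimate to each group.
    kde_pos = gaussian_kde(pos)
    kde_neg = gaussian_kde(neg)

    # Prior probability that a prediction is correct (test-set accuracy).
    p_correct = test_correct.mean()

    # Bayes' rule with the two estimated densities:
    # P(correct | score) = p * f_pos(score) / (p * f_pos(score) + (1 - p) * f_neg(score))
    f_pos = kde_pos(new_scores)
    f_neg = kde_neg(new_scores)
    return (p_correct * f_pos) / (p_correct * f_pos + (1.0 - p_correct) * f_neg)

The paper's actual algorithms differ in how the bandwidth is chosen and in how the per-category (one-versus-all) estimates are handled; the sketch only illustrates the density-ratio form that turns a raw top-label score into an estimate of P(correct | score). Plotting this calibrated confidence against empirical accuracy in score bins is one way to perform the visual sanity check the abstract recommends.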
Source journal: Machine Learning and Knowledge Extraction
CiteScore: 6.30
Self-citation rate: 0.00%
Review time: 7 weeks