The information of attribute uncertainties: what convolutional neural networks can learn about errors in input data

IF 6.3 | Region 2 (Physics and Astrophysics) | Q1 (Computer Science, Artificial Intelligence)
Natália V. N. Rodrigues, L. Raul Abramo, Nina S. Hirata
{"title":"属性不确定性信息:卷积神经网络可以从输入数据的错误中学习到什么","authors":"Natália V. N. Rodrigues, L. Raul Abramo, Nina S. Hirata","doi":"10.1088/2632-2153/ad0285","DOIUrl":null,"url":null,"abstract":"Abstract Errors in measurements are key to weighting the value of data, but are often neglected in Machine Learning (ML). 
We show how Convolutional Neural Networks (CNNs) are able to learn about the context and patterns of signal and noise, leading to improvements in the performance of classification methods.
We construct a model whereby two classes of objects follow an underlying Gaussian distribution, and where the features (the input data) have varying, but known, levels of noise -- in other words, each data point has a different error bar.
This model mimics the nature of scientific data sets, such as those from astrophysical surveys, where noise arises as a realization of random processes with known underlying distributions.
The classification of these objects can then be performed using standard statistical techniques (e.g., least-squares minimization or Markov-Chain Monte Carlo), as well as ML techniques. 
This allows us to take advantage of a maximum likelihood approach to object classification, and to measure the amount by which the ML methods are incorporating the information in the input data uncertainties.
We show that, when each data point is subject to different levels of noise (i.e., noises with different distribution functions, which is typically the case in scientific data sets), that information can be learned by the CNNs, raising the ML performance to at least the same level of the least-squares method -- and sometimes even surpassing it.
Furthermore, we show that, with varying noise levels, the confidence of the ML classifiers serves as a proxy for the underlying cumulative distribution function, but only if the information about specific input data uncertainties is provided to the CNNs.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"12 1","pages":"0"},"PeriodicalIF":6.3000,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"The information of attribute uncertainties: what convolutional neural networks can learn about errors in input data\",\"authors\":\"Natália V. N. Rodrigues, L. Raul Abramo, Nina S. Hirata\",\"doi\":\"10.1088/2632-2153/ad0285\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Errors in measurements are key to weighting the value of data, but are often neglected in Machine Learning (ML). 
We show how Convolutional Neural Networks (CNNs) are able to learn about the context and patterns of signal and noise, leading to improvements in the performance of classification methods.
We construct a model whereby two classes of objects follow an underlying Gaussian distribution, and where the features (the input data) have varying, but known, levels of noise -- in other words, each data point has a different error bar.
This model mimics the nature of scientific data sets, such as those from astrophysical surveys, where noise arises as a realization of random processes with known underlying distributions.
The classification of these objects can then be performed using standard statistical techniques (e.g., least-squares minimization or Markov-Chain Monte Carlo), as well as ML techniques. 
This allows us to take advantage of a maximum likelihood approach to object classification, and to measure the amount by which the ML methods are incorporating the information in the input data uncertainties.
We show that, when each data point is subject to different levels of noise (i.e., noises with different distribution functions, which is typically the case in scientific data sets), that information can be learned by the CNNs, raising the ML performance to at least the same level of the least-squares method -- and sometimes even surpassing it.
Furthermore, we show that, with varying noise levels, the confidence of the ML classifiers serves as a proxy for the underlying cumulative distribution function, but only if the information about specific input data uncertainties is provided to the CNNs.\",\"PeriodicalId\":33757,\"journal\":{\"name\":\"Machine Learning Science and Technology\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2023-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Machine Learning Science and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/2632-2153/ad0285\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Learning Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2632-2153/ad0285","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

Errors in measurements are key to weighting the value of data, but are often neglected in Machine Learning (ML).
We show how Convolutional Neural Networks (CNNs) are able to learn about the context and patterns of signal and noise, leading to improvements in the performance of classification methods.
We construct a model whereby two classes of objects follow an underlying Gaussian distribution, and where the features (the input data) have varying, but known, levels of noise -- in other words, each data point has a different error bar.
This model mimics the nature of scientific data sets, such as those from astrophysical surveys, where noise arises as a realization of random processes with known underlying distributions.
The classification of these objects can then be performed using standard statistical techniques (e.g., least-squares minimization or Markov-Chain Monte Carlo), as well as ML techniques. 
This allows us to take advantage of a maximum likelihood approach to object classification, and to measure the amount by which the ML methods are incorporating the information in the input data uncertainties.
We show that, when each data point is subject to different levels of noise (i.e., noise with different distribution functions, which is typically the case in scientific data sets), that information can be learned by the CNNs, raising the ML performance to at least the same level as the least-squares method -- and sometimes even surpassing it.
Furthermore, we show that, with varying noise levels, the confidence of the ML classifiers serves as a proxy for the underlying cumulative distribution function, but only if the information about specific input data uncertainties is provided to the CNNs.
Source journal

Machine Learning: Science and Technology (Computer Science, Artificial Intelligence)
CiteScore: 9.10
Self-citation rate: 4.40%
Annual articles: 86
Review turnaround: 5 weeks
Journal description: Machine Learning: Science and Technology is a multidisciplinary open access journal that bridges the application of machine learning across the sciences with advances in machine learning methods and theory as motivated by physical insights. Specifically, articles must fall into one of the following categories: advance the state of machine-learning-driven applications in the sciences, or make conceptual, methodological or theoretical advances in machine learning with applications to, inspiration from, or motivation by scientific problems.