Robust Neural Network Training Using Inverted Probability Distribution

Teerapaun Tanprasert, T. Tanprasert
{"title":"Robust Neural Network Training Using Inverted Probability Distribution","authors":"Teerapaun Tanprasert, T. Tanprasert","doi":"10.1145/3426826.3426827","DOIUrl":null,"url":null,"abstract":"This paper presents strategies to tweak the probability distribution of the data set to bias the training process of a neural network for a better learning outcome. For a real-world problem, provided that the probability distribution of the population can be assumed, the training set can be sampled from the population in such a way that its probability distribution satisfies certain targeted characteristics. For example, if the boundary between classes is critical to the training outcome, a larger proportion of training data may be drawn from the area around the boundaries. On the other hand, if the learning outcome is aimed at resembling a common concept encoded in the training set, learning from the data near the norm may be more effective. In order to explore the effectiveness of the various strategies, the concept was applied to two problems: 3-spiral and wine quality. Experimental results suggest that, whether the problem requires an emphasis on classifying boundary or recognizing the central pattern, our novel sampling strategy – inverted probability distribution – performs exceptionally well.","PeriodicalId":202857,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Machine Learning and Machine Intelligence","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 3rd International Conference on Machine Learning and Machine Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3426826.3426827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper presents strategies for tweaking the probability distribution of the data set in order to bias the training process of a neural network toward a better learning outcome. For a real-world problem, provided that the probability distribution of the population can be assumed, the training set can be sampled from the population in such a way that its probability distribution satisfies certain targeted characteristics. For example, if the boundary between classes is critical to the training outcome, a larger proportion of training data may be drawn from the areas around the boundaries. On the other hand, if the learning outcome is meant to resemble a common concept encoded in the training set, learning from data near the norm may be more effective. To explore the effectiveness of the various strategies, the concept was applied to two problems: 3-spiral and wine quality. Experimental results suggest that, whether the problem requires an emphasis on classifying boundaries or on recognizing the central pattern, our novel sampling strategy, inverted probability distribution, performs exceptionally well.
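
The abstract does not specify the paper's exact sampling procedure, but the idea of drawing a training set whose distribution is the inverse of the population distribution can be illustrated with a minimal sketch. The sketch below assumes the population density is approximated with a simple Gaussian kernel estimate over the available pool of data; the function and parameter names (`inverted_probability_sample`, `bandwidth`) are illustrative and not taken from the paper.

```python
import numpy as np

def inverted_probability_sample(X, y, n_samples, bandwidth=1.0, rng=None):
    """Draw a training subset whose sampling weights are inversely
    proportional to an estimated density of the data pool.

    Points in sparse regions (e.g. near class boundaries or in the tails)
    are over-represented; dense, "typical" regions are down-weighted.
    This is a sketch of the general idea, not the paper's exact method.
    """
    rng = np.random.default_rng(rng)

    # Crude Gaussian kernel density estimate of each point's local density.
    # A library estimator (e.g. sklearn.neighbors.KernelDensity) would also work.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    density = np.exp(-sq_dists / (2.0 * bandwidth ** 2)).mean(axis=1)

    # Invert the distribution: weight each point by 1 / density,
    # then normalise so the weights form a probability distribution.
    weights = 1.0 / (density + 1e-12)
    weights /= weights.sum()

    idx = rng.choice(len(X), size=n_samples, replace=False, p=weights)
    return X[idx], y[idx]
```

Using `density` directly as the sampling weights (i.e. dropping the inversion) would instead over-sample the region around the norm, which corresponds to the abstract's alternative strategy of learning a common concept from data near the centre of the distribution.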