Distributionally Robust Deep Learning using Hardness Weighted Sampling

Lucas Fidon, S. Ourselin, T. Vercauteren
{"title":"Distributionally Robust Deep Learning using Hardness Weighted Sampling","authors":"Lucas Fidon, S. Ourselin, T. Vercauteren","doi":"10.59275/j.melba.2022-8b6a","DOIUrl":null,"url":null,"abstract":"Limiting failures of machine learning systems is of paramount importance for safety-critical applications. In order to improve the robustness of machine learning systems, Distributionally Robust Optimization (DRO) has been proposed as a generalization of Empirical Risk Minimization (ERM). However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO in comparison to the wide-spread variants of Stochastic Gradient Descent (SGD) optimizers for ERM.We propose SGD with hardness weighted sampling, a principled and efficient optimization method for DRO in machine learning that is particularly suited in the context of deep learning. Similar to a hard example mining strategy in practice, the proposed algorithm is straightforward to implement and computationally as efficient as SGD-based optimizers used for deep learning, requiring minimal overhead computation. In contrast to typical ad hoc hard mining approaches, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and finite number of layers and parameters.Our experiments on fetal brain 3D MRI segmentation and brain tumor segmentation in MRI demonstrate the feasibility and the usefulness of our approach. Using our hardness weighted sampling for training a state-of-the-art deep learning pipeline leads to improved robustness to anatomical variabilities in automatic fetal brain 3D MRI segmentation using deep learning and to improved robustness to the image protocol variations in brain tumor segmentation.a decrease of 2% of the interquartile range of the Dice scores for the enhanced tumor and the tumor core regions.Our code is available at https://github.com/LucasFidon/HardnessWeightedSampler","PeriodicalId":75083,"journal":{"name":"The journal of machine learning for biomedical imaging","volume":"148 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The journal of machine learning for biomedical imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.59275/j.melba.2022-8b6a","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Limiting failures of machine learning systems is of paramount importance for safety-critical applications. To improve the robustness of machine learning systems, Distributionally Robust Optimization (DRO) has been proposed as a generalization of Empirical Risk Minimization (ERM). However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO compared to the widespread variants of Stochastic Gradient Descent (SGD) optimizers used for ERM. We propose SGD with hardness weighted sampling, a principled and efficient optimization method for DRO in machine learning that is particularly suited to deep learning. Similar to a hard example mining strategy in practice, the proposed algorithm is straightforward to implement and computationally as efficient as the SGD-based optimizers used for deep learning, requiring minimal overhead computation. In contrast to typical ad hoc hard mining approaches, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and a finite number of layers and parameters. Our experiments on fetal brain 3D MRI segmentation and brain tumor segmentation in MRI demonstrate the feasibility and the usefulness of our approach. Using our hardness weighted sampling to train a state-of-the-art deep learning pipeline leads to improved robustness to anatomical variabilities in automatic fetal brain 3D MRI segmentation and to improved robustness to image protocol variations in brain tumor segmentation, with a decrease of 2% in the interquartile range of the Dice scores for the enhanced tumor and tumor core regions. Our code is available at https://github.com/LucasFidon/HardnessWeightedSampler
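The core idea of hardness weighted sampling can be sketched as: keep a vector of per-example (stale) loss values, draw each minibatch with probabilities given by a softmax of these losses so that harder examples are sampled more often, and refresh the stored losses of the sampled examples after each forward pass. The minimal NumPy sketch below illustrates this idea only; the class name, the `beta` parameterization, and the training-loop placeholders are assumptions for illustration and not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

class HardnessWeightedSampler:
    """Minimal sketch (assumed interface): sample minibatch indices with
    probability softmax(beta * stale_loss), then refresh the stale losses."""

    def __init__(self, num_examples, beta=0.1, init_loss=1.0, seed=0):
        # Stale per-example losses; initialized to a constant so sampling
        # starts out (approximately) uniform.
        self.losses = np.full(num_examples, init_loss, dtype=np.float64)
        self.beta = beta  # robustness parameter: beta -> 0 recovers near-uniform (ERM-like) sampling
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        # Softmax of the stale losses; subtract the max for numerical stability.
        z = self.beta * self.losses
        z -= z.max()
        p = np.exp(z)
        return p / p.sum()

    def sample(self, batch_size):
        # Hard examples (high stale loss) are drawn with higher probability.
        return self.rng.choice(len(self.losses), size=batch_size,
                               replace=False, p=self.probabilities())

    def update(self, indices, batch_losses):
        # Refresh the stale losses of the examples seen in this minibatch.
        self.losses[indices] = batch_losses


if __name__ == "__main__":
    # Toy usage with a synthetic per-example loss function standing in for a
    # network forward pass (hypothetical placeholder, not part of the paper).
    num_examples, batch_size = 100, 8
    true_hardness = np.linspace(0.1, 2.0, num_examples)  # last examples are "harder"
    sampler = HardnessWeightedSampler(num_examples, beta=2.0)

    for step in range(200):
        idx = sampler.sample(batch_size)
        batch_losses = true_hardness[idx] + 0.05 * np.random.randn(batch_size)
        sampler.update(idx, batch_losses)
        # ...a standard SGD step on batch_losses.mean() would go here...

    print("mean stale loss of 10 hardest vs 10 easiest examples:",
          sampler.losses[-10:].mean(), sampler.losses[:10].mean())
```

Because the sampler only stores one scalar per training example and reuses the losses already computed in the forward pass, the overhead on top of a standard SGD training loop is negligible, which is the efficiency argument made in the abstract.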