NeuRecover: Regression-Controlled Repair of Deep Neural Networks with Training History

Sho Tokui, Susumu Tokumoto, Akihito Yoshii, F. Ishikawa, Takao Nakagawa, Kazuki Munakata, Shinji Kikuchi
{"title":"NeuRecover: Regression-Controlled Repair of Deep Neural Networks with Training History","authors":"Sho Tokui, Susumu Tokumoto, Akihito Yoshii, F. Ishikawa, Takao Nakagawa, Kazuki Munakata, Shinji Kikuchi","doi":"10.48550/arXiv.2203.00191","DOIUrl":null,"url":null,"abstract":"Systematic techniques to improve quality of deep neural networks (DNNs) are critical given the increasing demand for practical applications including safety-critical ones. The key challenge comes from the little controllability in updating DNNs. Retraining to fix some behavior often has a destructive impact on other behavior, causing regressions, i.e., the updated DNN fails with inputs correctly handled by the original one. This problem is crucial when engineers are required to investigate failures in intensive assurance activities for safety or trust. Search-based repair techniques for DNNs have potentials to tackle this challenge by enabling localized updates only on “responsible parameters” inside the DNN. However, the potentials have not been explored to realize sufficient controllability to suppress regressions in DNN repair tasks. In this paper, we propose a novel DNN repair method that makes use of the training history for judging which DNN parameters should be changed or not to suppress regressions. We implemented the method into a tool called Neurecover and evaluated it with three datasets. Our method outperformed the existing method by achieving often less than a quarter, even a tenth in some cases, number of regressions. Our method is especially effective when the repair requirements are tight to fix specific failure types. In such cases, our method showed stably low rates (<2 %) of regressions, which were in many cases a tenth of regressions caused by retraining.","PeriodicalId":437520,"journal":{"name":"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2203.00191","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Systematic techniques to improve the quality of deep neural networks (DNNs) are critical given the increasing demand for practical applications, including safety-critical ones. The key challenge lies in the limited controllability of DNN updates: retraining to fix one behavior often has a destructive impact on other behaviors, causing regressions, i.e., the updated DNN fails on inputs that the original one handled correctly. This problem is crucial when engineers must investigate failures as part of intensive assurance activities for safety or trust. Search-based repair techniques for DNNs have the potential to tackle this challenge by enabling localized updates only to the "responsible parameters" inside the DNN. However, this potential has not been exploited to realize sufficient controllability for suppressing regressions in DNN repair tasks. In this paper, we propose a novel DNN repair method that uses the training history to judge which DNN parameters should or should not be changed in order to suppress regressions. We implemented the method in a tool called NeuRecover and evaluated it with three datasets. Our method outperformed the existing method, often producing less than a quarter, and in some cases a tenth, of the regressions. Our method is especially effective when the repair requirements are tight, i.e., when specific failure types must be fixed. In such cases, our method achieved stably low regression rates (below 2%), which were in many cases a tenth of the regressions caused by retraining.
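The core idea the abstract describes is to consult the training history to decide which parameters are safe to change before running a localized, search-based repair. Below is a minimal sketch of one way such a history-based selection could look; the function, its inputs (an early checkpoint, the final weights, and loss gradients on failing vs. passing inputs), and the scoring heuristic are illustrative assumptions, not the paper's actual algorithm.

    # Hypothetical sketch (not the authors' implementation): selecting
    # repair-candidate parameters from training history, in the spirit
    # of NeuRecover. All names and the scoring rule are assumptions.
    import numpy as np

    def select_candidate_params(theta_early, theta_final,
                                grad_fail, grad_pass, top_k=10):
        """Score each parameter by how much its change during training is
        implicated in the current failures but not in the passing behavior.

        theta_early, theta_final: flattened parameters from an early
            checkpoint and from the final (to-be-repaired) model.
        grad_fail: gradient of the loss on failing inputs w.r.t. theta_final.
        grad_pass: gradient of the loss on passing inputs w.r.t. theta_final.
        Returns indices of the top_k candidate parameters.
        """
        delta = theta_final - theta_early   # how training moved each weight
        blame = np.abs(delta * grad_fail)   # change aligned with failure loss
        credit = np.abs(delta * grad_pass)  # change aligned with passing loss
        score = blame / (credit + 1e-12)    # prefer high blame, low credit
        return np.argsort(score)[-top_k:]

    # Toy usage with random stand-in values.
    rng = np.random.default_rng(0)
    n = 100
    idx = select_candidate_params(rng.normal(size=n), rng.normal(size=n),
                                  rng.normal(size=n), rng.normal(size=n))
    print("candidate parameter indices:", idx)

A search-based repair step would then restrict its mutations to the returned indices, which is how localized updates can suppress regressions: parameters implicated in the still-correct behavior are left untouched.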