“Know What You Know”: Predicting Behavior for Learning-Enabled Systems When Facing Uncertainty

Michael Austin Langford, B. Cheng
DOI: 10.1109/SEAMS51251.2021.00020
Published in: 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), May 2021
Citations: 6

Abstract

Because deep learning systems generalize poorly when training data is incomplete and misses corner cases, it is difficult to ensure the robustness of safety-critical self-adaptive systems with deep learning components. Stakeholders require a reasonable level of confidence that a safety-critical system will behave as expected in all contexts. However, uncertainty in the behavior of safety-critical Learning-Enabled Systems (LESs) arises when run-time contexts deviate from training and validation data. To this end, this paper proposes an approach to develop a more robust safety-critical LES by predicting its learned behavior when exposed to uncertainty, thereby enabling countermeasures that mitigate predicted failures. By combining evolutionary computation with machine learning, an automated method is introduced to assess and predict the behavior of an LES when faced with previously unseen environmental conditions. By experimenting with Deep Neural Networks (DNNs) under a variety of adverse environmental changes, the proposed method is compared to a Monte Carlo (i.e., random sampling) method. Results indicate that when Monte Carlo sampling fails to capture uncommon system behavior, the proposed method trains better behavior models while requiring fewer training examples.
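The contrast the abstract draws between Monte Carlo sampling and evolutionary search can be illustrated with a toy sketch. This is not the paper's implementation: the "system" here is a stand-in function whose prediction error spikes in a narrow corner-case region of a one-dimensional environment parameter, and all names, thresholds, and parameter values below are invented for illustration. The point is that uniform random sampling rarely lands in a rare failure region, while an evolutionary search guided only by the system's observed behavior concentrates samples there.

```python
import math
import random

random.seed(1)

def system_error(x):
    """Toy stand-in for an LES: prediction error spikes in a narrow
    corner-case region of the environment parameter x in [0, 1].
    The location (0.91) and width (0.03) are arbitrary for illustration."""
    return math.exp(-((x - 0.91) ** 2) / (2 * 0.03 ** 2))

FAILURE_THRESHOLD = 0.5  # observed error above this counts as a failure

def monte_carlo_sample(n):
    """Baseline: uniform random sampling of environmental conditions."""
    return [random.random() for _ in range(n)]

def evolutionary_search(generations=40, pop_size=30, sigma=0.02):
    """Evolve environmental conditions toward high observed error.
    Fitness is the system's *observed* behavior, so the search assumes
    no knowledge of where the failure region lies."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the half with the worst observed behavior.
        population.sort(key=system_error, reverse=True)
        parents = population[: pop_size // 2]
        # Gaussian mutation, clipped to the valid parameter range.
        children = [min(1.0, max(0.0, p + random.gauss(0.0, sigma)))
                    for p in parents]
        population = parents + children
    return population

mc = [x for x in monte_carlo_sample(50) if system_error(x) > FAILURE_THRESHOLD]
evo = [x for x in evolutionary_search() if system_error(x) > FAILURE_THRESHOLD]
print(f"Monte Carlo failures found: {len(mc)} of 50 samples")
print(f"Evolutionary failures found: {len(evo)} of 30 final candidates")
```

With a comparable sampling budget, the evolved population clusters inside the failure region while the uniform samples mostly miss it, which mirrors the abstract's claim that guided search captures uncommon behavior with fewer examples.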