Verification of Neural Networks for Safety Critical Applications

K. Khalifa, M. Safar, M. El-Kharashi
{"title":"Verification of Neural Networks for Safety Critical Applications","authors":"K. Khalifa, M. Safar, M. El-Kharashi","doi":"10.1109/ICM50269.2020.9331504","DOIUrl":null,"url":null,"abstract":"In recent years, Neural Networks (NNs) have been widely adopted in engineering automated driving systems with examples in perception, decision-making, or even end-to-end scenarios. As these systems are safety-critical in nature, they are too complex and hard to verify. For using neural networks in safety-critical domains, it is important to know if a decision made by a neural network is supported by prior similarities in the training process. Verifying a trained neural network is to measure the extent of the decisions made by the neural network, which are based on prior similarities in the training process. In this paper, we propose a runtime monitoring that can measure the reliability of the neural network trained to classify a new input based on prior similarities in the training set. In the training process, the runtime monitor stores the values of the neurons of certain layers, which represent the neurons activation pattern for each example in the training data. We use the Binary Decision Diagrams (BDDs) formal technique to store the neuron activation patterns in binary form. In the inference process, a classification decision measured by a hamming distance is made to any new input by examining if the runtime monitor contains a similar neurons activation pattern. If the runtime monitor does not contain any similar activation pattern, it generates a warning that the decision is not based on prior similarities in the training data. Unlike previous work, we monitored more layers to allow for more neurons activation pattern of each input. We demonstrate our approach using the MNIST benchmark set. Our experimental results show that by adjusting the hamming distance, 75.63% of the misclassified labels are unseen activation patterns, which are not similar to any stored activation patterns from the training time.","PeriodicalId":243968,"journal":{"name":"2020 32nd International Conference on Microelectronics (ICM)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 32nd International Conference on Microelectronics (ICM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICM50269.2020.9331504","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In recent years, Neural Networks (NNs) have been widely adopted in engineering automated driving systems, with examples in perception, decision-making, and even end-to-end scenarios. As these systems are safety-critical in nature, yet highly complex, they are hard to verify. To use neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities from the training process. Verifying a trained neural network then amounts to measuring the extent to which its decisions are based on such prior similarities. In this paper, we propose a runtime monitor that measures the reliability with which a trained neural network classifies a new input, based on prior similarities in the training set. During training, the runtime monitor stores the values of the neurons of certain layers, which represent the neuron activation pattern for each example in the training data. We use the Binary Decision Diagram (BDD) formal technique to store the neuron activation patterns in binary form. During inference, the classification decision for any new input is assessed, using the Hamming distance, by examining whether the runtime monitor contains a similar neuron activation pattern. If the runtime monitor does not contain any similar activation pattern, it generates a warning that the decision is not based on prior similarities in the training data. Unlike previous work, we monitor more layers, allowing a richer neuron activation pattern for each input. We demonstrate our approach on the MNIST benchmark set. Our experimental results show that, by adjusting the Hamming distance, 75.63% of the misclassified labels are unseen activation patterns, i.e., patterns not similar to any activation pattern stored at training time.
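The monitor described in the abstract works in two phases: at training time it binarizes the activations of selected layers and stores the resulting patterns (the paper uses BDDs for a compact representation), and at inference time it checks whether a new input's pattern lies within a chosen Hamming distance of any stored pattern, raising a warning otherwise. The sketch below illustrates this idea; it is a minimal approximation, not the authors' implementation: a plain Python set stands in for the BDD store, and the binarization rule, layer choice, and threshold are illustrative assumptions.

```python
import numpy as np


class ActivationMonitor:
    """Simplified runtime monitor: stores binarized activation patterns
    observed during training and flags inference-time patterns that are
    not within a Hamming-distance threshold of any stored pattern.
    (The paper stores patterns in BDDs; a set is used here for clarity.)
    """

    def __init__(self, hamming_threshold=2):
        self.hamming_threshold = hamming_threshold
        self.patterns = set()  # stored binary patterns, as tuples of 0/1

    @staticmethod
    def binarize(activations):
        # Assumed rule: a neuron is "on" if its (post-ReLU) activation is positive.
        return tuple(int(a > 0) for a in np.ravel(activations))

    def record(self, activations):
        """Training phase: store the activation pattern of a training example."""
        self.patterns.add(self.binarize(activations))

    def check(self, activations):
        """Inference phase: return True if the pattern is "seen", i.e. within
        the Hamming-distance threshold of some stored pattern."""
        p = self.binarize(activations)
        for q in self.patterns:
            hamming = sum(a != b for a, b in zip(p, q))
            if hamming <= self.hamming_threshold:
                return True
        return False  # unseen pattern: the decision lacks training-time support


# Hypothetical usage with activations taken from a monitored layer:
monitor = ActivationMonitor(hamming_threshold=2)
monitor.record(np.array([0.7, 0.0, 1.2, 0.0]))        # from a training example
if not monitor.check(np.array([0.0, 0.3, 0.0, 0.9])):  # new input at inference
    print("Warning: decision not supported by training-time activation patterns")
```

Note that the linear scan over stored patterns above does not scale to a full training set; compactly storing and querying the patterns is what the BDD encoding in the paper is for.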