{"title":"神经网络在安全关键应用中的验证","authors":"K. Khalifa, M. Safar, M. El-Kharashi","doi":"10.1109/ICM50269.2020.9331504","DOIUrl":null,"url":null,"abstract":"In recent years, Neural Networks (NNs) have been widely adopted in engineering automated driving systems with examples in perception, decision-making, or even end-to-end scenarios. As these systems are safety-critical in nature, they are too complex and hard to verify. For using neural networks in safety-critical domains, it is important to know if a decision made by a neural network is supported by prior similarities in the training process. Verifying a trained neural network is to measure the extent of the decisions made by the neural network, which are based on prior similarities in the training process. In this paper, we propose a runtime monitoring that can measure the reliability of the neural network trained to classify a new input based on prior similarities in the training set. In the training process, the runtime monitor stores the values of the neurons of certain layers, which represent the neurons activation pattern for each example in the training data. We use the Binary Decision Diagrams (BDDs) formal technique to store the neuron activation patterns in binary form. In the inference process, a classification decision measured by a hamming distance is made to any new input by examining if the runtime monitor contains a similar neurons activation pattern. If the runtime monitor does not contain any similar activation pattern, it generates a warning that the decision is not based on prior similarities in the training data. Unlike previous work, we monitored more layers to allow for more neurons activation pattern of each input. We demonstrate our approach using the MNIST benchmark set. Our experimental results show that by adjusting the hamming distance, 75.63% of the misclassified labels are unseen activation patterns, which are not similar to any stored activation patterns from the training time.","PeriodicalId":243968,"journal":{"name":"2020 32nd International Conference on Microelectronics (ICM)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Verification of Neural Networks for Safety Critical Applications\",\"authors\":\"K. Khalifa, M. Safar, M. El-Kharashi\",\"doi\":\"10.1109/ICM50269.2020.9331504\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, Neural Networks (NNs) have been widely adopted in engineering automated driving systems with examples in perception, decision-making, or even end-to-end scenarios. As these systems are safety-critical in nature, they are too complex and hard to verify. For using neural networks in safety-critical domains, it is important to know if a decision made by a neural network is supported by prior similarities in the training process. Verifying a trained neural network is to measure the extent of the decisions made by the neural network, which are based on prior similarities in the training process. In this paper, we propose a runtime monitoring that can measure the reliability of the neural network trained to classify a new input based on prior similarities in the training set. In the training process, the runtime monitor stores the values of the neurons of certain layers, which represent the neurons activation pattern for each example in the training data. 
We use the Binary Decision Diagrams (BDDs) formal technique to store the neuron activation patterns in binary form. In the inference process, a classification decision measured by a hamming distance is made to any new input by examining if the runtime monitor contains a similar neurons activation pattern. If the runtime monitor does not contain any similar activation pattern, it generates a warning that the decision is not based on prior similarities in the training data. Unlike previous work, we monitored more layers to allow for more neurons activation pattern of each input. We demonstrate our approach using the MNIST benchmark set. Our experimental results show that by adjusting the hamming distance, 75.63% of the misclassified labels are unseen activation patterns, which are not similar to any stored activation patterns from the training time.\",\"PeriodicalId\":243968,\"journal\":{\"name\":\"2020 32nd International Conference on Microelectronics (ICM)\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 32nd International Conference on Microelectronics (ICM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICM50269.2020.9331504\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 32nd International Conference on Microelectronics (ICM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICM50269.2020.9331504","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Verification of Neural Networks for Safety Critical Applications
In recent years, Neural Networks (NNs) have been widely adopted in engineering automated driving systems, with examples in perception, decision-making, and even end-to-end scenarios. Because these systems are safety-critical in nature yet highly complex, they are hard to verify. To use neural networks in safety-critical domains, it is important to know whether a decision made by a neural network is supported by prior similarities seen during training. Verifying a trained neural network therefore amounts to measuring the extent to which its decisions are based on such prior similarities. In this paper, we propose a runtime monitor that measures the reliability with which a trained neural network classifies a new input, based on prior similarities in the training set. During training, the runtime monitor stores the values of the neurons of selected layers, which represent the neuron activation pattern for each example in the training data. We use Binary Decision Diagrams (BDDs), a formal technique, to store the neuron activation patterns in binary form. During inference, the classification decision for any new input is checked, using a Hamming distance, by examining whether the runtime monitor contains a similar neuron activation pattern. If the runtime monitor does not contain any similar activation pattern, it generates a warning that the decision is not based on prior similarities in the training data. Unlike previous work, we monitor multiple layers, allowing a richer neuron activation pattern for each input. We demonstrate our approach on the MNIST benchmark set. Our experimental results show that, by adjusting the Hamming distance, 75.63% of the misclassified labels correspond to unseen activation patterns, i.e., patterns not similar to any activation pattern stored at training time.
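To make the abstract's monitoring scheme concrete, the sketch below illustrates the general idea of recording binarized activation patterns at training time and checking Hamming distance at inference. This is a simplified illustration under my own assumptions, not the authors' implementation: the paper stores patterns compactly in BDDs, whereas here a plain per-class list of bit vectors stands in for the BDD, and the helper names (monitored_activations, model.predict) are hypothetical placeholders.

```python
# Simplified sketch of a runtime activation monitor (assumption: the paper's
# BDD storage is replaced here by plain per-class lists of bit vectors).
import numpy as np


class ActivationMonitor:
    def __init__(self, hamming_threshold=2):
        # Maximum Hamming distance at which a new pattern counts as "seen".
        self.hamming_threshold = hamming_threshold
        # Stored binary activation patterns, grouped by class label.
        self.patterns = {}

    @staticmethod
    def binarize(activations):
        # Map the monitored layers' activations to a 0/1 pattern
        # (here simply the sign of each activation).
        return (np.asarray(activations) > 0).astype(np.uint8)

    def record(self, activations, label):
        # Training time: remember the activation pattern observed for this label.
        self.patterns.setdefault(label, []).append(self.binarize(activations))

    def is_supported(self, activations, predicted_label):
        # Inference time: is any stored pattern for the predicted class within
        # the Hamming threshold of the new input's pattern?
        pattern = self.binarize(activations)
        for stored in self.patterns.get(predicted_label, []):
            if np.count_nonzero(stored != pattern) <= self.hamming_threshold:
                return True
        return False  # unseen pattern -> the caller should raise a warning


# Hypothetical usage (monitored_activations(x) and model.predict(x) are assumed
# helpers returning the monitored-layer values and the predicted label):
#
# monitor = ActivationMonitor(hamming_threshold=2)
# for x, y in training_data:
#     monitor.record(monitored_activations(x), y)
#
# label = model.predict(x_new)
# if not monitor.is_supported(monitored_activations(x_new), label):
#     print("Warning: decision not supported by training-time activation patterns")
```

Raising the Hamming threshold makes the monitor more permissive (fewer warnings, but more misclassifications accepted), while lowering it flags more inputs as unsupported; the 75.63% figure reported above reflects one such trade-off on MNIST.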