Darryl Hond, H. Asgari, Leonardo Symonds, M. Newman
2022 IEEE International Conference on Assured Autonomy (ICAA), March 2022. DOI: 10.1109/ICAA52185.2022.00016
Layer-Wise Analysis of Neuron Activation Values for Performance Verification of Artificial Neural Network Classifiers
Object classification in dynamic, uncontrolled environments is one of the functional elements of safety-critical Autonomous Systems. It is crucial to develop methods for the specification and verification of these elements, and of the associated algorithms, in order to gain confidence in the overall safety of Autonomous Systems and in their functional and behavioural correctness and adequacy. Artificial Neural Network (ANN) object classifiers must therefore be assured and verified against requirements. A classifier might be required to generalize to a satisfactory extent, in the sense that its classification performance must be maintained at an acceptable level when the input data differ from the training data. This requirement would apply when data received during operation are drawn from a different distribution from the training data. The specification and verification of classifier generalization capability can be based on measures of the dissimilarity between operational and training data. A requirement could state the permitted forms of the relationship between classification performance and a data dissimilarity measure. We have previously proposed such a dissimilarity measure, which we term the Neuron Region Distance (NRD). The NRD is a function of network activation values. In this paper, we analyze neuron activation values layer by layer across a neural network, in order to advance towards a novel, generalized form of the NRD, which we call the Per Neuron Ranking (PNR) measure. The activation value analysis provides insight into the required formulation of the PNR measure.
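To make the general idea concrete, the sketch below shows one hypothetical way an activation-based dissimilarity between operational and training data could be computed layer by layer. The network, the out-of-range scoring rule, and all names here are illustrative assumptions for exposition only; the abstract does not give the actual NRD or PNR formulations, and this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer MLP with random weights -- a stand-in for a trained
# classifier. The paper's actual networks and the NRD/PNR formulas are
# not specified in the abstract, so this is purely illustrative.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def layer_activations(x):
    """Return the post-ReLU activation vector of each layer."""
    a1 = np.maximum(0.0, x @ W1 + b1)
    a2 = np.maximum(0.0, a1 @ W2 + b2)
    return [a1, a2]

# "Training" data: record the observed min/max activation range of every
# neuron, layer by layer.
train = rng.normal(size=(100, 4))
acts = [np.stack(a) for a in zip(*(layer_activations(x) for x in train))]
ranges = [(a.min(axis=0), a.max(axis=0)) for a in acts]

def out_of_range_fraction(x):
    """Hypothetical dissimilarity score: the fraction of neurons whose
    activation on input x falls outside the range seen in training."""
    flags = []
    for a, (lo, hi) in zip(layer_activations(x), ranges):
        flags.append((a < lo) | (a > hi))
    return np.concatenate(flags).mean()

# A training input stays inside the recorded ranges by construction
# (score 0.0); a heavily scaled input, standing in for operational data
# drawn from a different distribution, scores higher.
print(out_of_range_fraction(train[0]))
print(out_of_range_fraction(10.0 * train[0]))
```

A measure of this kind could then be plotted against classification accuracy to check the sort of performance-versus-dissimilarity requirement the abstract describes.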