Ramneet Kaur, Susmit Jha, Anirban Roy, O. Sokolsky, Insup Lee
2023 IEEE International Conference on Assured Autonomy (ICAA), June 2023. DOI: 10.1109/ICAA58325.2023.00011
Predicting Out-of-Distribution Performance of Deep Neural Networks Using Model Conformance
With the growing interest in using Deep Neural Networks (DNNs) in safety-critical cyber-physical systems, such as autonomous vehicles, providing assurance about the safe deployment of these models becomes ever more important. Safe deployment of deep learning models in the real world, where inputs can differ from the models' training environment, requires characterizing the performance and the prediction uncertainty of these models, particularly on novel and out-of-distribution (OOD) inputs. This has motivated the development of methods to predict the accuracy of DNNs in novel environments, i.e., environments unseen during training. These methods, however, assume access to some labeled data from the novel environment, which is unrealistic in many real-world settings. We propose an approach for predicting the accuracy of a DNN classifier under a shift from its training distribution without assuming access to labels of the inputs drawn from the shifted distribution. We demonstrate the efficacy of the proposed approach on two autonomous driving datasets: the GTSRB dataset for image classification, and the ONCE dataset, which provides synchronized LiDAR and camera feeds for object detection. We show that the proposed approach is applicable to predicting accuracy across different input modalities (camera images and LiDAR point clouds).
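The abstract does not detail the model-conformance method itself. As a rough illustration of the general idea of label-free accuracy prediction under distribution shift, the sketch below estimates accuracy from the classifier's own confidence, counting the fraction of predictions whose softmax confidence clears a threshold. This is a generic confidence-thresholding baseline, not the authors' technique; the function names and the 0.9 threshold are assumptions for illustration only.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax, numerically stabilized by subtracting the row max."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def estimate_accuracy_unlabeled(logits, threshold=0.9):
    """Estimate classifier accuracy on unlabeled (possibly shifted) inputs.

    Treats predictions whose top softmax probability exceeds `threshold`
    as a proxy for correct predictions; no ground-truth labels are used.
    """
    conf = softmax(logits).max(axis=1)
    return float((conf >= threshold).mean())

# Toy example: three inputs, two classes; two confident, one uncertain.
logits = np.array([[5.0, 0.0],
                   [0.1, 0.0],
                   [4.0, 0.0]])
print(estimate_accuracy_unlabeled(logits))  # 2 of 3 predictions are confident
```

Such confidence-based proxies tend to be overconfident under strong distribution shift, which is exactly the failure mode that motivates more principled approaches like the one proposed in the paper.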