Quantification of the Impact of Random Hardware Faults on Safety-Critical AI Applications: CNN-Based Traffic Sign Recognition Case Study

Michael Beyer, A. Morozov, K. Ding, Sheng Ding, K. Janschek

2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), October 2019. DOI: 10.1109/ISSREW.2019.00058
Artificial Intelligence (AI) is rapidly entering almost every safety-critical domain, including the automotive industry. The next generation of functional safety standards therefore has to define appropriate verification and validation techniques and propose adequate fault-tolerance mechanisms. Several AI frameworks, such as Google's TensorFlow, have already proven to be effective and reliable platforms. However, like any other software, AI-based applications are susceptible to random hardware faults, e.g., bit-flips in RAM or CPU registers, which can lead to silent data corruption. It is therefore crucial to understand how different hardware faults affect the accuracy of AI applications. This paper introduces our new fault injection framework for TensorFlow and reports the results of first experiments conducted on a Convolutional Neural Network (CNN) based traffic sign classifier. The results demonstrate the feasibility of the fault injection framework and, in particular, help to identify the most critical parts of the neural network under test.
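The abstract describes the TensorFlow fault injection framework only at a high level, so the following Python sketch is merely an illustration of the general idea: emulating a single random bit-flip in a stored float32 weight of a Keras model so that classification accuracy with and without the fault can be compared. The helper names (`flip_bit`, `inject_weight_fault`) and the Keras-based weight handling are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

def flip_bit(value: float, bit: int) -> np.float32:
    # Reinterpret the float32 as a 32-bit word, XOR one bit, reinterpret back.
    word = np.array([value], dtype=np.float32).view(np.uint32)
    word ^= np.uint32(1 << bit)
    return word.view(np.float32)[0]

def inject_weight_fault(model: tf.keras.Model, layer_index: int,
                        rng: np.random.Generator) -> None:
    # Flip one randomly chosen bit in one randomly chosen kernel weight of the
    # given layer (assumes the layer has a kernel, e.g. Conv2D or Dense).
    weights = model.layers[layer_index].get_weights()
    kernel = weights[0]
    flat = kernel.reshape(-1)
    idx = int(rng.integers(flat.size))   # which weight is corrupted
    bit = int(rng.integers(32))          # which bit of the float32 word flips
    flat[idx] = flip_bit(flat[idx], bit)
    weights[0] = flat.reshape(kernel.shape)
    model.layers[layer_index].set_weights(weights)

# Hypothetical usage: compare test accuracy with and without the injected fault,
# repeating the experiment per layer and per bit position to rank the parts of
# the network by criticality.
# baseline = model.evaluate(x_test, y_test)[1]
# inject_weight_fault(model, layer_index=2, rng=np.random.default_rng(0))
# faulty = model.evaluate(x_test, y_test)[1]
```

Repeating such injection campaigns over many randomly chosen weights, layers, and bit positions, and recording the resulting accuracy degradation, is one way the impact of random hardware faults on a CNN-based classifier could be quantified in the spirit of the paper.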