{"title":"Detecting Functional Safety Violations in Online AI Accelerators","authors":"Shamik Kundu, K. Basu","doi":"10.1109/IOLTS56730.2022.9897702","DOIUrl":null,"url":null,"abstract":"With the ubiquitous deployment of Deep Neural Networks (DNNs) in low latency mission critical applications, there has been an extensive proliferation of custom-built AI inference accelerators at the edge. Drastic technology scaling in recent years has made these circuits highly vulnerable to faults due to various reasons like aging, latent defects, single event upsets, etc. Such faults are highly detrimental to the classification accuracy of the AI accelerator, leading to the critical Functional Safety (FuSa) violation, when used in mission-critical applications. In order to detect such violations in mission mode, we analyze the efficiency of a software-based self test scheme that employs functional test patterns, akin to instances in the application dataset. Such patterns are either selected from the dataset of the DNN, or generated from scratch utilizing the concept of Generative Adversarial Networks (GANs). When evaluated on state-of-the-art DNNs on multivariate exhaustive datasets, the GAN generated test patterns significantly improve FuSa violation detection coverage by up to 130.28%, compared to the selected test patterns, thereby accomplishing efficient testing of the AI accelerator, online, in mission mode.","PeriodicalId":274595,"journal":{"name":"2022 IEEE 28th International Symposium on On-Line Testing and Robust System Design (IOLTS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 28th International Symposium on On-Line Testing and Robust System Design (IOLTS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IOLTS56730.2022.9897702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
With the ubiquitous deployment of Deep Neural Networks (DNNs) in low-latency, mission-critical applications, custom-built AI inference accelerators have proliferated at the edge. Drastic technology scaling in recent years has made these circuits highly vulnerable to faults arising from aging, latent defects, single event upsets, and similar causes. Such faults severely degrade the classification accuracy of the AI accelerator, leading to critical Functional Safety (FuSa) violations when the accelerator is deployed in mission-critical applications. To detect such violations in mission mode, we analyze the efficiency of a software-based self-test scheme that employs functional test patterns akin to instances in the application dataset. These patterns are either selected from the DNN's dataset or generated from scratch using Generative Adversarial Networks (GANs). When evaluated on state-of-the-art DNNs across multiple exhaustive datasets, the GAN-generated test patterns improve FuSa violation detection coverage by up to 130.28% compared to the selected test patterns, thereby accomplishing efficient online testing of the AI accelerator in mission mode.
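A minimal sketch of the general idea behind such an online software-based self-test, not the authors' implementation: a small set of functional test patterns (selected from the application dataset or GAN-generated offline) is fed to the deployed accelerator in mission mode, and its predictions are compared against golden labels recorded on a fault-free reference. The names `self_test`, `infer_fn`, and the toy accelerator below are hypothetical placeholders for illustration only.

```python
import numpy as np


def self_test(infer_fn, test_patterns, golden_labels, max_mismatches=0):
    """Return True if the accelerator passes the online self-test.

    infer_fn       -- callable mapping a batch of inputs to predicted class ids
                      (hypothetical interface to the AI accelerator)
    test_patterns  -- functional test patterns, selected from the application
                      dataset or GAN-generated offline
    golden_labels  -- predictions of a fault-free reference on the same patterns
    max_mismatches -- mismatches tolerated before flagging a FuSa violation
    """
    predictions = infer_fn(test_patterns)
    mismatches = int(np.sum(predictions != golden_labels))
    return mismatches <= max_mismatches


if __name__ == "__main__":
    # Toy stand-in for the accelerator: a fixed linear classifier.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(16, 10))
    infer = lambda x: np.argmax(x @ weights, axis=1)

    patterns = rng.normal(size=(32, 16))   # stand-in for functional test patterns
    golden = infer(patterns)               # golden labels from the fault-free run

    # Inject a fault (e.g., a stuck-at weight row) and re-run the self-test;
    # a mismatch against the golden labels flags a potential FuSa violation.
    weights[3, :] = 0.0
    print("FuSa check passed:", self_test(infer, patterns, golden))
```

In this sketch the quality of the test patterns determines coverage: patterns that exercise more of the accelerator's datapath (the motivation for GAN-generated patterns in the paper) are more likely to expose a fault as a prediction mismatch.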