Understanding Permanent Hardware Failures in Deep Learning Training Accelerator Systems

Yi He, Yanjing Li

2023 IEEE European Test Symposium (ETS), May 22, 2023
DOI: 10.1109/ETS56758.2023.10173972
Abstract
Hardware failures pose a critical threat to deep neural network (DNN) training workloads, and the urgency of tackling this challenge (known more broadly as the Silent Data Corruption challenge) has been raised widely across the industry. Industry reports indicate that a large fraction of the failures observed in real systems are permanent hardware failures in logic, yet the effects these failures impose on DNN training workloads remain poorly understood. In this paper, we present the first resilience study on this subject, focusing on deep learning (DL) training accelerator systems. We developed a fault injection framework that accurately simulates the effects of permanent faults, and conducted 100K fault injection experiments. Our results provide a fundamental understanding of how permanent hardware failures in logic affect training workloads and ultimately produce unexpected training outcomes. Based on this new knowledge, we developed efficient software-based detection and recovery techniques to mitigate the logic permanent hardware failures that are likely to generate unexpected outcomes. Evaluation on Google Cloud TPUs shows that our techniques are effective and practical: they require only 15–25 lines of code changes and introduce 0.004%–0.025% performance/energy overhead across representative neural network models.
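To make the kind of experiment such a fault injection framework performs concrete, the sketch below simulates a permanent stuck-at fault on one output lane of a matrix-multiply unit in NumPy. This is a simplified illustration, not the paper's actual framework: the `inject_stuck_at` helper and the chosen lane/bit parameters are assumptions introduced here for exposition only.

```python
import numpy as np

def inject_stuck_at(matmul_out, lane, bit, stuck_value):
    """Illustrative (not the paper's) model of a permanent stuck-at fault
    on one output lane of a matrix-multiply unit: force bit `bit` of the
    float32 encoding of every result produced by `lane` to `stuck_value`."""
    out = matmul_out.copy()
    bits = out[:, lane].astype(np.float32).view(np.uint32)
    mask = np.uint32(1 << bit)
    if stuck_value:
        bits |= mask          # stuck-at-1: the bit always reads 1
    else:
        bits &= ~mask         # stuck-at-0: the bit always reads 0
    out[:, lane] = bits.view(np.float32)
    return out

# One forward pass through a dense layer whose MAC lane 2 is faulty.
# The corruption hits every value that lane produces, on every step --
# which is what distinguishes a permanent fault from a transient bit flip.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16)).astype(np.float32)
w = rng.standard_normal((16, 4)).astype(np.float32)

clean = x @ w                                           # fault-free output
faulty = inject_stuck_at(clean, lane=2, bit=22, stuck_value=1)
```

Repeating such injections across fault sites, bit positions, and training steps is how a campaign on the order of the paper's 100K experiments can characterize which permanent faults silently corrupt training outcomes.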