Danny Santoso, Hyeran Jeon
DOI: 10.1109/DFT.2019.8875404
2019 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), October 2019
Understanding of GPU Architectural Vulnerability for Deep Learning Workloads
Deep learning has proved its effectiveness on a variety of problems, including object detection, speech recognition, and stock price forecasting. Among hardware accelerators, the GPU is one of the most favored platforms for deep learning, providing fast neuron processing through massive parallelism. Recently, there have been extensive studies on improving the performance and power consumption of deep learning computing. However, the reliability of deep learning has not yet been thoroughly studied. Though a few studies have evaluated the reliability of GPU architectures for general-purpose applications, few have examined the architectural vulnerability factor (AVF) of the core algorithms and optimization techniques used in deep learning workloads. In this paper, we evaluate the AVF of GPU architectures while running various deep learning workloads and provide in-depth analysis by comparing the AVF of deep learning workloads with that of other GPU applications. We also analyze the reliability impact of various optimization techniques for deep learning workloads.
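For readers unfamiliar with the metric, the AVF of a hardware structure is conventionally defined (via ACE analysis) as the fraction of bit-cycles that hold ACE bits, i.e., bits required for Architecturally Correct Execution. A minimal sketch of that computation, using a made-up occupancy trace rather than any data from this paper:

```python
# Sketch of the standard ACE-analysis definition of the Architectural
# Vulnerability Factor (AVF): the fraction of bit-cycles in a structure
# that are ACE (required for Architecturally Correct Execution).
# The trace values below are illustrative, not results from the paper.

def avf(ace_bits_per_cycle, total_bits):
    """AVF = (sum of ACE bits over all cycles) / (total bits * cycles)."""
    cycles = len(ace_bits_per_cycle)
    return sum(ace_bits_per_cycle) / (total_bits * cycles)

# Hypothetical per-cycle ACE-bit counts for a 128-bit structure
# observed over 8 cycles of simulation:
trace = [64, 64, 32, 0, 96, 128, 64, 0]
print(f"AVF = {avf(trace, 128):.4f}")  # 448 / 1024 = 0.4375
```

A structure with AVF near 0 masks almost all soft errors, while an AVF near 1 means nearly every bit flip corrupts program output; workload-dependent variation in this ratio is what the paper's evaluation measures.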