Konstantin Lübeck, Alexander Louis-Ferdinand Jung, Felix Wedlich, O. Bringmann
2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), October 2022. DOI: 10.1109/CASES55004.2022.00020
Work-in-Progress: Ultra-fast yet Accurate Performance Prediction for Deep Neural Network Accelerators
We present an automatic methodology for accurately predicting the performance of Deep Neural Network (DNN) accelerators from abstract, highly flexible descriptions of both the accelerator architecture and the DNN. By mapping partially unrolled neural network layers onto accelerator architectures, we automatically construct an analytical performance model. Exploiting the dataflow-driven nature of DNNs, this model needs to evaluate only a few loop iterations to determine the performance of a whole DNN layer.
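The core idea of evaluating only a few loop iterations and extrapolating over the full loop nest can be illustrated with a toy sketch. This is not the paper's actual model; the loop-nest representation, cost numbers, and function names below are all hypothetical, and it assumes (as the abstract suggests) that DNN layer loops have a regular, steady-state cost per iteration:

```python
# Hypothetical sketch (not the paper's methodology): estimate a DNN layer's
# cycle count by sampling the steady-state cost of a few innermost loop
# iterations and scaling by the trip counts of the enclosing loops.
from dataclasses import dataclass


@dataclass
class LoopDim:
    name: str
    trip_count: int          # total iterations of this loop level
    cycles_per_iter: float   # only the innermost level's cost is used here


def predict_layer_cycles(loop_nest, warmup_cycles=0.0, sample_iters=4):
    """Average the cost of `sample_iters` innermost iterations, then
    extrapolate across the whole loop nest instead of simulating it."""
    inner = loop_nest[-1]
    # Sample a few steady-state iterations (constant cost in this toy model).
    sampled = sum(inner.cycles_per_iter for _ in range(sample_iters))
    per_iter = sampled / sample_iters
    total = per_iter * inner.trip_count
    for dim in loop_nest[:-1]:
        total *= dim.trip_count
    return warmup_cycles + total


# Toy 3x3 convolution loop nest mapped onto a made-up accelerator: the
# innermost dimension is partially unrolled, so only it carries a cycle cost.
nest = [
    LoopDim("out_channels", 64, 0.0),
    LoopDim("out_rows", 56, 0.0),
    LoopDim("mac_inner", 56 * 9, 1.0),
]
print(predict_layer_cycles(nest, warmup_cycles=10.0))
```

Because the per-iteration cost is assumed constant in steady state, the estimate is a closed-form product of trip counts rather than a full simulation, which is what makes this style of analytical model so much faster than cycle-accurate approaches.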