Exact Memory- and Communication-aware Scheduling of DNNs on Pipelined Edge TPUs

Jiaqi Yin, Zhiru Zhang, Cunxi Yu
2022 IEEE/ACM 7th Symposium on Edge Computing (SEC), December 2022
DOI: 10.1109/SEC54971.2022.00023
Deep neural networks (DNNs) represent the state of the art in many applications but have substantial computational and memory requirements, which greatly limit their training and deployment in real-world systems. Deployment challenges are especially acute on edge systems, which are far more resource-constrained (e.g., computation- and memory-bounded) and have recently attracted significant interest across many application scenarios. Devices such as Edge TPUs usually provide limited on-chip storage and memory bandwidth, and heuristic-based ahead-of-time compilation techniques are highly limited in optimizing inference performance on them because they lack performance guarantees. This work proposes a novel exact pipeline scheduling framework that enables model-parameter caching, data-dependency, and device-to-device communication-aware multi-objective optimization. The framework is powered by novel, versatile SDC+ILP formulations supporting both propositional logic and non-equality constraints. Experimental results demonstrate that the proposed scheduling framework consistently outperforms the commercial Edge TPU Compiler, with speedups of more than 4x on eleven ImageNet models in physical pipelined Edge TPU setups. In addition, we demonstrate consistent real-world energy-efficiency improvements measured with a high-precision power meter. Finally, the proposed framework also demonstrates multi-model co-deployment on a pipelined Edge TPU system, which the Edge TPU Compiler does not support.
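To give a feel for the SDC (system of difference constraints) core that such scheduling formulations build on, the sketch below is a deliberately minimal illustration, not the paper's actual SDC+ILP formulation (which additionally models parameter caching, device-to-device communication, propositional-logic and non-equality constraints, and multiple objectives). It assumes only that each DNN operator has a fixed latency and that dependency edges impose difference constraints s[v] - s[u] >= latency[u] on start times; the operator names and latencies are hypothetical. For a pure DAG with such constraints, longest-path relaxation in topological order yields the exact minimal feasible schedule.

```python
# Toy SDC-style scheduler: start times s[v] subject to difference
# constraints s[v] - s[u] >= latency[u] for each dependency edge (u, v).
# The node names and latencies below are illustrative, not from the paper.

from collections import defaultdict

def sdc_schedule(nodes, edges, latency):
    """Return earliest feasible start times for a DAG of operators."""
    # Kahn's algorithm for a topological order.
    indeg = {v: 0 for v in nodes}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = [v for v in nodes if indeg[v] == 0]
    for u in order:
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    # Longest-path relaxation: exact minimum under difference constraints.
    start = {v: 0 for v in nodes}
    for u in order:
        for v in succ[u]:
            start[v] = max(start[v], start[u] + latency[u])
    return start

# A small DNN fragment: conv feeds two parallel branches that merge.
nodes = ["conv", "pool", "branch", "concat"]
edges = [("conv", "pool"), ("conv", "branch"),
         ("pool", "concat"), ("branch", "concat")]
latency = {"conv": 3, "pool": 1, "branch": 2, "concat": 1}
print(sdc_schedule(nodes, edges, latency))
# {'conv': 0, 'pool': 3, 'branch': 3, 'concat': 5}
```

The concat operator cannot start until both predecessors finish, so its start time is driven by the slower branch (3 + 2 = 5). Extending this core to an ILP is what lets the paper's framework add memory-capacity and communication-aware constraints that plain longest-path relaxation cannot express.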