{"title":"面向嵌入式FPGA部署的微型神经网络搜索","authors":"Haiyan Qin, Yejun Zeng, Jinyu Bai, Wang Kang","doi":"10.1109/AICAS57966.2023.10168571","DOIUrl":null,"url":null,"abstract":"Embedded FPGAs have become increasingly popular as acceleration platforms for the deployment of edge-side artificial intelligence (AI) applications, due in part to their flexible and configurable heterogeneous architectures. However, the complex deployment process hinders the realization of AI democratization, particularly at the edge. In this paper, we propose a software-hardware co-design framework that enables simultaneous searching for neural network architectures and corresponding accelerator designs on embedded FPGAs. The proposed framework comprises a hardware-friendly neural architecture search space, a reconfigurable streaming-based accelerator architecture, and a model performance estimator. An evolutionary algorithm targeting multi-objective optimization is employed to identify the optimal neural architecture and corresponding accelerator design. 
We evaluate our framework on various datasets and demonstrate that, in a typical edge AI scenario, the searched network and accelerator can achieve up to a 2.9% accuracy improvement and up to a 21 speedup compared to manually designed networks based on× common accelerator designs when deployed on a widely used embedded FPGA (Xilinx XC7Z020).","PeriodicalId":296649,"journal":{"name":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Searching Tiny Neural Networks for Deployment on Embedded FPGA\",\"authors\":\"Haiyan Qin, Yejun Zeng, Jinyu Bai, Wang Kang\",\"doi\":\"10.1109/AICAS57966.2023.10168571\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Embedded FPGAs have become increasingly popular as acceleration platforms for the deployment of edge-side artificial intelligence (AI) applications, due in part to their flexible and configurable heterogeneous architectures. However, the complex deployment process hinders the realization of AI democratization, particularly at the edge. In this paper, we propose a software-hardware co-design framework that enables simultaneous searching for neural network architectures and corresponding accelerator designs on embedded FPGAs. The proposed framework comprises a hardware-friendly neural architecture search space, a reconfigurable streaming-based accelerator architecture, and a model performance estimator. An evolutionary algorithm targeting multi-objective optimization is employed to identify the optimal neural architecture and corresponding accelerator design. 
We evaluate our framework on various datasets and demonstrate that, in a typical edge AI scenario, the searched network and accelerator can achieve up to a 2.9% accuracy improvement and up to a 21 speedup compared to manually designed networks based on× common accelerator designs when deployed on a widely used embedded FPGA (Xilinx XC7Z020).\",\"PeriodicalId\":296649,\"journal\":{\"name\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICAS57966.2023.10168571\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS57966.2023.10168571","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Searching Tiny Neural Networks for Deployment on Embedded FPGA
Embedded FPGAs have become increasingly popular as acceleration platforms for deploying edge-side artificial intelligence (AI) applications, due in part to their flexible and configurable heterogeneous architectures. However, the complex deployment process hinders the democratization of AI, particularly at the edge. In this paper, we propose a software-hardware co-design framework that simultaneously searches for neural network architectures and corresponding accelerator designs on embedded FPGAs. The proposed framework comprises a hardware-friendly neural architecture search space, a reconfigurable streaming-based accelerator architecture, and a model performance estimator. An evolutionary algorithm targeting multi-objective optimization is employed to identify the optimal neural architecture and corresponding accelerator design. We evaluate our framework on various datasets and demonstrate that, in a typical edge AI scenario, the searched network and accelerator achieve up to a 2.9% accuracy improvement and up to a 21× speedup compared to manually designed networks based on common accelerator designs when deployed on a widely used embedded FPGA (Xilinx XC7Z020).
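The multi-objective evolutionary search described in the abstract can be sketched as a Pareto-based loop over a toy architecture search space. Everything below is a hypothetical illustration, not the paper's actual framework: the three-field architecture encoding and the analytic `estimate_accuracy` / `estimate_latency` functions stand in for the paper's search space and model performance estimator.

```python
import random

# Hypothetical, simplified search space: an architecture is a tuple of
# (depth, width multiplier, kernel size). These names and the analytic
# estimators below are illustrative assumptions only.
DEPTHS = [2, 3, 4, 5]
WIDTHS = [0.25, 0.5, 0.75, 1.0]
KERNELS = [1, 3, 5]

def random_arch(rng):
    return (rng.choice(DEPTHS), rng.choice(WIDTHS), rng.choice(KERNELS))

def mutate(arch, rng):
    # Re-sample one randomly chosen field of the architecture tuple.
    depth, width, kernel = arch
    choice = rng.randrange(3)
    if choice == 0:
        depth = rng.choice(DEPTHS)
    elif choice == 1:
        width = rng.choice(WIDTHS)
    else:
        kernel = rng.choice(KERNELS)
    return (depth, width, kernel)

def estimate_accuracy(arch):
    # Stand-in for a trained accuracy predictor: larger models score higher.
    depth, width, kernel = arch
    return 0.6 + 0.05 * depth + 0.2 * width + 0.01 * kernel

def estimate_latency(arch):
    # Stand-in for a hardware performance model: larger models run slower.
    depth, width, kernel = arch
    return depth * width * kernel * kernel

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and better on one.
    acc_a, lat_a = estimate_accuracy(a), estimate_latency(a)
    acc_b, lat_b = estimate_accuracy(b), estimate_latency(b)
    return (acc_a >= acc_b and lat_a <= lat_b) and (acc_a > acc_b or lat_a < lat_b)

def evolve(pop_size=20, generations=10, seed=0):
    rng = random.Random(seed)
    population = [random_arch(rng) for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(rng.choice(population), rng) for _ in range(pop_size)]
        candidates = list(set(population + children))
        # Keep the non-dominated candidates (current Pareto front), then
        # pad with random survivors to maintain the population size.
        front = [c for c in candidates
                 if not any(dominates(o, c) for o in candidates)]
        pad = min(max(0, pop_size - len(front)), len(candidates))
        population = (front + rng.sample(candidates, pad))[:pop_size]
    # Return the final Pareto front: accuracy/latency trade-off points.
    return [c for c in population if not any(dominates(o, c) for o in population)]

if __name__ == "__main__":
    for arch in sorted(evolve(), key=estimate_latency):
        print(arch, round(estimate_accuracy(arch), 3), estimate_latency(arch))
```

The loop returns a set of mutually non-dominated architectures rather than a single winner, which matches the multi-objective framing: the final choice along the accuracy/latency front is left to the deployment constraints.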