{"title":"基于ReRAM点积引擎的精确、高效、鲁棒的二元神经网络的硬件感知自动宽度搜索","authors":"Qidong Tang, Zhezhi He, Fangxin Liu, Zongwu Wang, Yiyuan Zhou, Yinghuan Zhang, Li Jiang","doi":"10.1109/ASP-DAC52403.2022.9712542","DOIUrl":null,"url":null,"abstract":"Binary Neural Networks (BNNs) have attracted tremendous attention in ReRAM-based Process-In-Memory (PIM) systems, since they significantly simplify the hardware-expensive peripheral circuits and memory footprint. Meanwhile, BNNs are proven to have superior bit error tolerance, which inspires us to make use of this capability in PIM systems whose memory bit-cell suffers from severe device defects. Nevertheless, prior works of BNN do not simultaneously meet the criterion that 1) achieving similar accuracy w.r.t its full-precision counterpart; 2) fully binarized without full-precision operation; and 3) rapid BNN construction, which hampers its real-world deployment. This work proposes the first framework called HAWIS, whose generated BNN can satisfy all the above criteria. The proposed framework utilizes the super-net pre-training technique and reinforcement-learning based width search for BNN generation. Our experimental results show that the BNN generated by HAWIS achieves 69.3% top-1 accuracy on ImageNet with ResNet-18. In terms of robustness, our method maximally increases the inference accuracy by 66.9% and 20% compared to 8-bit and baseline 1-bit counterparts under ReRAM non-ideal effects. Our-code is available at: https://github.com/DamonAtSjtu/HAWIS.","PeriodicalId":239260,"journal":{"name":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine\",\"authors\":\"Qidong Tang, Zhezhi He, Fangxin Liu, Zongwu Wang, Yiyuan Zhou, Yinghuan Zhang, Li Jiang\",\"doi\":\"10.1109/ASP-DAC52403.2022.9712542\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Binary Neural Networks (BNNs) have attracted tremendous attention in ReRAM-based Process-In-Memory (PIM) systems, since they significantly simplify the hardware-expensive peripheral circuits and memory footprint. Meanwhile, BNNs are proven to have superior bit error tolerance, which inspires us to make use of this capability in PIM systems whose memory bit-cell suffers from severe device defects. Nevertheless, prior works of BNN do not simultaneously meet the criterion that 1) achieving similar accuracy w.r.t its full-precision counterpart; 2) fully binarized without full-precision operation; and 3) rapid BNN construction, which hampers its real-world deployment. This work proposes the first framework called HAWIS, whose generated BNN can satisfy all the above criteria. The proposed framework utilizes the super-net pre-training technique and reinforcement-learning based width search for BNN generation. Our experimental results show that the BNN generated by HAWIS achieves 69.3% top-1 accuracy on ImageNet with ResNet-18. In terms of robustness, our method maximally increases the inference accuracy by 66.9% and 20% compared to 8-bit and baseline 1-bit counterparts under ReRAM non-ideal effects. 
Our-code is available at: https://github.com/DamonAtSjtu/HAWIS.\",\"PeriodicalId\":239260,\"journal\":{\"name\":\"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASP-DAC52403.2022.9712542\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC52403.2022.9712542","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine
Binary Neural Networks (BNNs) have attracted tremendous attention for ReRAM-based Process-In-Memory (PIM) systems, since they significantly simplify the hardware-expensive peripheral circuits and reduce the memory footprint. Meanwhile, BNNs are proven to have superior bit-error tolerance, which inspires us to exploit this capability in PIM systems whose memory bit-cells suffer from severe device defects. Nevertheless, prior BNN works do not simultaneously meet the criteria of 1) achieving accuracy similar to that of the full-precision counterpart, 2) being fully binarized without any full-precision operations, and 3) supporting rapid BNN construction, which hampers their real-world deployment. This work proposes HAWIS, the first framework whose generated BNNs satisfy all of the above criteria. The proposed framework utilizes super-net pre-training and reinforcement-learning-based width search for BNN generation. Our experimental results show that the BNN generated by HAWIS achieves 69.3% top-1 accuracy on ImageNet with ResNet-18. In terms of robustness, under ReRAM non-ideal effects our method increases the inference accuracy by up to 66.9% and 20% compared to the 8-bit and baseline 1-bit counterparts, respectively. Our code is available at: https://github.com/DamonAtSjtu/HAWIS.
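For illustration, below is a minimal, hypothetical Python/NumPy sketch of a reinforcement-learning-based width search in the spirit described above: a controller samples per-layer channel widths for a binarized network, scores the sampled configuration with a reward that trades off an accuracy proxy against a ReRAM hardware-cost proxy, and updates its sampling distribution with REINFORCE. The functions proxy_accuracy and hardware_cost, the candidate width set, and all hyper-parameters are assumptions made for this sketch; the actual HAWIS framework evaluates a pre-trained super-net and hardware models instead.

import numpy as np

rng = np.random.default_rng(0)
NUM_LAYERS = 4
WIDTH_CHOICES = np.array([64, 128, 256, 512])   # hypothetical candidate channel counts per layer

# Controller: independent softmax logits per layer over the width choices.
logits = np.zeros((NUM_LAYERS, len(WIDTH_CHOICES)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def proxy_accuracy(widths):
    # Hypothetical stand-in: wider binary layers recover more accuracy,
    # with diminishing returns (replaces evaluating the shared super-net).
    return float(np.mean(1.0 - 1.0 / np.sqrt(widths / 64.0)))

def hardware_cost(widths):
    # Hypothetical stand-in: ReRAM crossbar area/energy grows with width.
    return float(np.sum(widths)) / (NUM_LAYERS * WIDTH_CHOICES.max())

LAMBDA = 0.3      # accuracy-vs-hardware trade-off
LR = 0.5          # controller learning rate
baseline = 0.0    # moving-average reward baseline to reduce variance

for step in range(200):
    probs = np.apply_along_axis(softmax, 1, logits)
    choices = np.array([rng.choice(len(WIDTH_CHOICES), p=p) for p in probs])
    widths = WIDTH_CHOICES[choices]

    reward = proxy_accuracy(widths) - LAMBDA * hardware_cost(widths)
    baseline = 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline

    # REINFORCE update: push probability mass toward rewarded width choices.
    for l, c in enumerate(choices):
        grad = -probs[l]
        grad[c] += 1.0            # d log p(c) / d logits = onehot(c) - probs
        logits[l] += LR * advantage * grad

best = WIDTH_CHOICES[np.argmax(np.apply_along_axis(softmax, 1, logits), axis=1)]
print("searched per-layer widths:", best)

In this toy setting the controller converges toward the widest layers allowed by the hardware penalty; in the full framework the same policy-gradient loop would instead be driven by validation accuracy of width-sliced super-net weights and by ReRAM crossbar cost and defect models.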