Edge Retraining of FeFET LM-GA CiM for Write Variation & Reliability Error Compensation
Shinsei Yoshikiyo, Naoko Misawa, K. Toprasertpong, Shinichi Takagi, C. Matsui, Ken Takeuchi
2022 IEEE International Memory Workshop (IMW), May 2022
DOI: 10.1109/IMW52921.2022.9779255
Citations: 0
Abstract
This paper proposes an edge retraining method for local multiply and global accumulate (LM-GA) FeFET Computation-in-Memory (CiM) to compensate for the accuracy degradation of neural networks (NNs) caused by FeFET device errors. The weights of the original NN model, accurately trained in a cloud data center, are written into the edge FeFET LM-GA CiM and are then altered by FeFET device errors in the field. Partially retraining the NN model on the edge device reduces the effect of these errors. The proposed method can retrain with a small amount of data, matched to the capacity of the edge device. Three types of FeFET errors, write variation, read disturbance, and data retention, are modeled from actual device measurements for evaluation. The evaluation shows that, for all three error types, more than 50% of the lost inference accuracy can be recovered. Furthermore, retraining a few additional layers increases the accuracy recovery rate by 20-30%. When the data used for retraining are reduced to 1%, the accuracy recovery rate decreases by only about 15%.
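The compensation scheme the abstract describes can be sketched in miniature: perturb trained weights to mimic write variation, then retrain only a subset of parameters on a small data subset and check how much task error is recovered. The sketch below is a hedged illustration, not the paper's method: it uses a toy random-feature regression model in place of the NN, a Gaussian weight perturbation (`sigma` is an assumed, not measured, variation strength) in place of the FeFET error models, and gradient steps on the output layer only to stand in for partial edge retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Cloud training" of a toy model (stand-in for the accurate NN) ---
# Random-feature network: frozen random first layer, trained output layer.
d_in, d_hidden, n = 16, 64, 512
W1 = rng.normal(size=(d_hidden, d_in))        # fixed first-layer weights
X = rng.normal(size=(n, d_in))
H = np.tanh(X @ W1.T)                         # hidden activations
w_true = rng.normal(size=d_hidden)
y = H @ w_true + 0.01 * rng.normal(size=n)    # targets with slight noise

# Closed-form least squares plays the role of accurate cloud training.
w_cloud, *_ = np.linalg.lstsq(H, y, rcond=None)

def mse(w):
    """Task error of output weights w on the full data set."""
    return float(np.mean((H @ w - y) ** 2))

# --- Write variation: Gaussian perturbation of the weights as written ---
sigma = 0.3                                   # assumed variation strength
w_written = w_cloud + sigma * rng.normal(size=w_cloud.shape)

# --- Partial edge retraining: gradient steps on the output layer only,
#     using a small data subset (the "small data" kept at the edge) ---
idx = rng.choice(n, size=128, replace=False)
Hs, ys = H[idx], y[idx]
w = w_written.copy()
lr = 0.01
for _ in range(500):
    grad = (2.0 / len(idx)) * Hs.T @ (Hs @ w - ys)
    w -= lr * grad

mse_clean, mse_written, mse_retrained = mse(w_cloud), mse(w_written), mse(w)
```

In this toy setting, `mse_written` rises well above `mse_clean` after the perturbation, and the partial retraining pulls `mse_retrained` back down, mirroring the paper's finding that retraining only part of the model on reduced data recovers much of the lost accuracy.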