{"title":"基于高斯分布的权重映射提高深度神经网络容错性的研究","authors":"Ruoxu Sun, Jinyu Zhan, Wei Jiang, Yucheng Jiang","doi":"10.1145/3477244.3478521","DOIUrl":null,"url":null,"abstract":"In this paper, we approach to improve the fault tolerance of Deep Neural Networks (DNNs) for safety-critical artificial intelligent applications. We propose to remap the range of 32-bit float to weights to reduce the influence of invalid weights caused by bit-flip faults. From preliminary experiments, we observe that weakening bit-flip faults which make positive weights larger can help to improve the reliability of DNNs. Then, we propose a gaussian distribution based mapping method to prevent weights from being influenced by bit-flip faults, in which a novel function is formulated to remap the relation between 32-bit float and the values of weights. Extensive experiments demonstrate that our approach can improve the accuracy of VGG16 from 13.5% to 80.5%, which is better than the other six tolerance approaches of DNNs.","PeriodicalId":354206,"journal":{"name":"Proceedings of the 2021 International Conference on Embedded Software","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving fault tolerance of DNNs through weight remapping based on gaussian distribution: work-in-progress\",\"authors\":\"Ruoxu Sun, Jinyu Zhan, Wei Jiang, Yucheng Jiang\",\"doi\":\"10.1145/3477244.3478521\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we approach to improve the fault tolerance of Deep Neural Networks (DNNs) for safety-critical artificial intelligent applications. We propose to remap the range of 32-bit float to weights to reduce the influence of invalid weights caused by bit-flip faults. From preliminary experiments, we observe that weakening bit-flip faults which make positive weights larger can help to improve the reliability of DNNs. Then, we propose a gaussian distribution based mapping method to prevent weights from being influenced by bit-flip faults, in which a novel function is formulated to remap the relation between 32-bit float and the values of weights. 
Extensive experiments demonstrate that our approach can improve the accuracy of VGG16 from 13.5% to 80.5%, which is better than the other six tolerance approaches of DNNs.\",\"PeriodicalId\":354206,\"journal\":{\"name\":\"Proceedings of the 2021 International Conference on Embedded Software\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 International Conference on Embedded Software\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3477244.3478521\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 International Conference on Embedded Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3477244.3478521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving fault tolerance of DNNs through weight remapping based on gaussian distribution: work-in-progress
In this paper, we propose an approach to improving the fault tolerance of Deep Neural Networks (DNNs) for safety-critical artificial intelligence applications. We propose to remap the range of the 32-bit float representation to weight values in order to reduce the influence of invalid weights caused by bit-flip faults. Preliminary experiments show that weakening bit-flip faults that enlarge positive weights helps improve the reliability of DNNs. We then propose a Gaussian-distribution-based mapping method to protect weights from bit-flip faults, in which a novel function remaps the relation between the 32-bit float representation and the weight values. Extensive experiments demonstrate that our approach improves the accuracy of VGG16 from 13.5% to 80.5%, outperforming six other DNN fault-tolerance approaches.
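To make the idea concrete, below is a minimal illustrative sketch of one way a Gaussian-distribution-based weight remapping could bound the damage of a single bit flip. It is not the authors' published mapping function: the names flip_bit and GaussianWeightCodec are hypothetical, and scipy's normal CDF/PPF stands in for the paper's novel remapping function. The sketch assumes that layer weights roughly follow a Gaussian, so storing each weight as its quantile under that Gaussian (and clipping before decoding) keeps any corrupted stored value within a plausible weight range, whereas a flip in the exponent bits of a raw float32 weight can inflate it by many orders of magnitude.

```python
# Illustrative sketch (not the authors' exact mapping): represent weights
# through a Gaussian CDF fitted to the layer's weight distribution, so a
# bit-flipped stored value still decodes to a plausible weight.
import struct

import numpy as np
from scipy.stats import norm


def flip_bit(x: np.float32, bit: int) -> np.float32:
    """Flip one bit (0 = LSB, 31 = sign) of an IEEE-754 float32 value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", float(x)))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return np.float32(flipped)


class GaussianWeightCodec:
    """Store each weight as its quantile under a Gaussian fitted to the layer."""

    def __init__(self, weights: np.ndarray, eps: float = 1e-6):
        self.mu = float(weights.mean())
        self.sigma = float(weights.std()) + eps
        self.eps = eps

    def encode(self, w: np.ndarray) -> np.ndarray:
        # Map weights into (0, 1) via the Gaussian CDF.
        return norm.cdf(w, loc=self.mu, scale=self.sigma).astype(np.float32)

    def decode(self, q: np.ndarray) -> np.ndarray:
        # Clip before the inverse CDF, so even a bit-flipped stored value can
        # only decode to a weight within a few standard deviations of the mean.
        q = np.clip(q, self.eps, 1.0 - self.eps)
        return norm.ppf(q, loc=self.mu, scale=self.sigma).astype(np.float32)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.05, size=1000).astype(np.float32)
    codec = GaussianWeightCodec(weights)

    w = weights[0]
    # Flipping a high exponent bit of the raw float32 weight makes it enormous.
    print("raw weight      :", w, "-> after exponent bit flip:", flip_bit(w, 30))

    # The same fault applied to the encoded quantile decodes to a bounded weight.
    q = codec.encode(np.float32(w))
    q_faulty = flip_bit(np.float32(q), 30)
    print("remapped weight :", w, "-> after exponent bit flip:",
          codec.decode(np.array([q_faulty]))[0])
```

The design point the sketch is meant to convey is that the stored representation, not the network, absorbs the fault: because decoding always passes through a bounded, Gaussian-shaped mapping, no single bit flip can turn a small weight into an extreme outlier that dominates a layer's output.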