C. Bolchini, A. Bosio, Luca Cassano, B. Deveautour, G. D. Natale, A. Miele, Ian O’Connor, E. Vatajelu
{"title":"机器学习替代计算范式的可靠性:炒作还是希望?","authors":"C. Bolchini, A. Bosio, Luca Cassano, B. Deveautour, G. D. Natale, A. Miele, Ian O’Connor, E. Vatajelu","doi":"10.1109/ddecs54261.2022.9770138","DOIUrl":null,"url":null,"abstract":"Today we observe amazing performance achieved by Machine Learning (ML); for specific tasks it even surpasses human capabilities. Unfortunately, nothing comes for free: the hidden cost behind ML performance stems from its high complexity in terms of operations to be computed and the involved amount of data. For this reasons, custom Artificial Intelligence hardware accelerators based on alternative computing paradigms are attracting large interest. Such dedicated devices support the energy-hungry data movement, speed of computation, and memory resources that MLs require to realize their full potential. However, when ML is deployed on safety-/mission-critical applications, dependability becomes a concern. This paper presents the state of the art of custom Artificial Intelligence hardware architectures for ML, here Spiking and Convolutional Neural Networks, and shows the best practices to evaluate their dependability.","PeriodicalId":334461,"journal":{"name":"2022 25th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dependability of Alternative Computing Paradigms for Machine Learning: hype or hope?\",\"authors\":\"C. Bolchini, A. Bosio, Luca Cassano, B. Deveautour, G. D. Natale, A. Miele, Ian O’Connor, E. Vatajelu\",\"doi\":\"10.1109/ddecs54261.2022.9770138\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Today we observe amazing performance achieved by Machine Learning (ML); for specific tasks it even surpasses human capabilities. Unfortunately, nothing comes for free: the hidden cost behind ML performance stems from its high complexity in terms of operations to be computed and the involved amount of data. For this reasons, custom Artificial Intelligence hardware accelerators based on alternative computing paradigms are attracting large interest. Such dedicated devices support the energy-hungry data movement, speed of computation, and memory resources that MLs require to realize their full potential. However, when ML is deployed on safety-/mission-critical applications, dependability becomes a concern. 
This paper presents the state of the art of custom Artificial Intelligence hardware architectures for ML, here Spiking and Convolutional Neural Networks, and shows the best practices to evaluate their dependability.\",\"PeriodicalId\":334461,\"journal\":{\"name\":\"2022 25th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 25th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ddecs54261.2022.9770138\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ddecs54261.2022.9770138","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Dependability of Alternative Computing Paradigms for Machine Learning: hype or hope?
Today we observe amazing performance achieved by Machine Learning (ML); for specific tasks it even surpasses human capabilities. Unfortunately, nothing comes for free: the hidden cost behind ML performance stems from its high complexity, both in terms of the operations to be computed and of the amount of data involved. For this reason, custom Artificial Intelligence hardware accelerators based on alternative computing paradigms are attracting great interest. Such dedicated devices support the energy-hungry data movement, computation speed, and memory resources that ML models require to realize their full potential. However, when ML is deployed in safety- and mission-critical applications, dependability becomes a concern. This paper presents the state of the art of custom Artificial Intelligence hardware architectures for ML, here Spiking and Convolutional Neural Networks, and shows the best practices to evaluate their dependability.