TinderAI: Support System for Matching AI Algorithms and Embedded Devices
Matteo Francobaldi, A. D. Filippo, Andrea Borghesi, Nikola Pizurica, Igor Jovančević, Tim Llewellynn, Miguel de Prado
The International FLAIRS Conference Proceedings, published 2023-05-08. DOI: 10.32473/flairs.36.133100
Abstract
Artificial Intelligence (AI) is becoming increasingly important and pervasive in the modern world. The widespread adoption of AI algorithms is reflected in the extensive range of HW devices on which they can be deployed, from high-performance computing nodes to low-power embedded devices. Given the large set of heterogeneous resources where AI algorithms can be deployed, finding the most suitable device and its configuration is challenging, even for experts.
We propose a data-driven approach to assist AI adopters and developers in choosing the optimal HW resource. Our approach is based on three key elements: i) fair benchmarking of target AI algorithms on a set of heterogeneous platforms, ii) creation of ML models to learn the behaviour of these AI algorithms, and iii) support guidelines to help identify the best deployment option for a given AI algorithm. We demonstrate our approach on a specific (and relevant) use case: Deep Neural Network (DNN) inference on embedded devices.
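As a rough illustration of element ii), the sketch below trains a regressor on benchmark measurements to predict DNN inference latency from model and device descriptors, and then ranks candidate devices for a new network. This is a minimal sketch under assumptions of our own: the feature columns, file names, and the choice of a random-forest regressor are hypothetical and not the paper's actual formulation.

```python
# Illustrative sketch only: the paper does not specify its exact ML formulation.
# Assumes a hypothetical benchmark table with per-(DNN, device) measurements.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical columns: DNN descriptors (parameters, FLOPs, input size),
# device descriptors (cores, RAM, clock), and measured inference latency in ms.
df = pd.read_csv("benchmarks.csv")
features = ["num_params", "flops", "input_size", "cpu_cores", "ram_mb", "clock_mhz"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["latency_ms"], test_size=0.2, random_state=0
)

# Learn the latency behaviour of DNNs across heterogeneous devices.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out benchmarks:", model.score(X_test, y_test))

# Rank candidate devices for a new DNN by predicted latency (lower is better).
candidates = pd.read_csv("candidate_devices.csv")  # hypothetical file, same columns
candidates["predicted_latency_ms"] = model.predict(candidates[features])
print(candidates.sort_values("predicted_latency_ms").head())
```

Any surrogate model that generalises across unseen DNN/device pairs could play this role; the point is that predicted performance, rather than exhaustive on-device measurement, drives the deployment recommendation.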