Towards Inference Delivery Networks: Distributing Machine Learning with Optimality Guarantees

T. Si Salem, Gabriele Castellano, G. Neglia, Fabio Pianese, Andrea Araldo

2021 19th Mediterranean Communication and Computer Networking Conference (MedComNet), May 6, 2021. DOI: 10.1109/MedComNet52149.2021.9501272
We present the novel idea of inference delivery networks (IDNs): networks of computing nodes that coordinate to satisfy inference requests, achieving the best trade-off between latency and accuracy. IDNs bridge the dichotomy between device and cloud execution by integrating inference delivery at the various tiers of the infrastructure continuum (access, edge, regional data center, cloud). We propose a distributed dynamic policy for ML model allocation in an IDN, by which each node periodically updates its local set of inference models based on requests observed during the recent past plus limited information exchanged with its neighbor nodes. Our policy offers strong performance guarantees in an adversarial setting and shows improvements over greedy heuristics of similar complexity in realistic scenarios.
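To make the per-node update loop concrete, below is a minimal illustrative sketch in Python. It is not the paper's policy (which comes with adversarial performance guarantees): it only mirrors the mechanism the abstract describes, where each node scores candidate models by requests observed in the recent window plus aggregate demand reported by neighbors, then greedily refills its capacity. All names (IDNNode, model_sizes, neighbor_demand) are hypothetical.

```python
from collections import Counter
from typing import Dict, List


class IDNNode:
    """Illustrative IDN node hosting a set of ML models within a capacity budget.

    NOTE: simplified sketch of the idea in the abstract, not the paper's
    allocation policy; the actual policy offers adversarial guarantees.
    """

    def __init__(self, capacity: int, model_sizes: Dict[str, int]):
        self.capacity = capacity            # storage/compute budget of this node
        self.model_sizes = model_sizes      # cost of hosting each candidate model
        self.local_models: List[str] = []   # models currently hosted
        self.recent_requests = Counter()    # requests seen since the last update

    def observe_request(self, model_id: str) -> None:
        """Record an inference request routed through this node."""
        self.recent_requests[model_id] += 1

    def update_allocation(self, neighbor_demand: Dict[str, int]) -> None:
        """Periodic update: rank models by local + neighbor demand, refill greedily."""
        score = Counter(self.recent_requests)
        for model_id, count in neighbor_demand.items():
            score[model_id] += count        # limited information exchanged with neighbors

        self.local_models, used = [], 0
        for model_id, _ in score.most_common():
            size = self.model_sizes.get(model_id, 0)
            if size and used + size <= self.capacity:
                self.local_models.append(model_id)
                used += size

        self.recent_requests.clear()        # start a new observation window


# Hypothetical usage: two local requests for small models, neighbors report
# heavy demand for a larger one; the node keeps the large model plus what fits.
node = IDNNode(capacity=10, model_sizes={"resnet18": 4, "resnet50": 8, "mobilenet": 2})
for m in ["resnet18", "mobilenet", "resnet18"]:
    node.observe_request(m)
node.update_allocation(neighbor_demand={"resnet50": 5})
print(node.local_models)  # ['resnet50', 'mobilenet'] under these scores
```

Such a greedy refill is the kind of baseline heuristic the abstract compares against; the proposed distributed dynamic policy is designed to outperform it while retaining guarantees even when the request sequence is adversarial.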