Low-Latency Speculative Inference on Distributed Multi-Modal Data Streams
Tianxing Li, Jin Huang, Erik Risinger, Deepak Ganesan
GetMobile: Mobile Computing and Communications Review 43(1), pp. 23–26, October 2022. DOI: 10.1145/3568113.3568121
While multi-modal deep learning is useful in distributed sensing tasks like human tracking, activity recognition, and audio and video analysis, deploying state-of-the-art multi-modal models in a wirelessly networked sensor system poses unique challenges. The data sizes for different modalities can be highly asymmetric (e.g., video vs. audio), and these differences can lead to significant delays between streams in the presence of wireless dynamics. As a result, a slow stream can significantly slow down a multi-modal inference system in the cloud, leading to either increased latency (when inference is blocked by the slow stream) or degraded inference accuracy (if inference proceeds without waiting).
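Since the abstract states only the problem, the following minimal asyncio sketch (our illustration, not the authors' implementation) contrasts a baseline that blocks on the slow stream with the speculative alternative the title suggests: infer early from the fast modality plus a predicted stand-in for the late one, then verify once the real data arrives and re-run only on a mis-speculation. The delays, feature names, predictor, and string-equality "verification" are all assumptions made for the sketch.

```python
import asyncio
import random
import time

# Illustrative stand-ins for the two modalities. The delays are assumptions:
# audio payloads are small and arrive quickly, while video payloads are large
# and stall unpredictably under wireless dynamics.

async def recv_audio() -> str:
    await asyncio.sleep(0.01)                        # fast, small payload
    return "video_frame_context_audio"[-5:] and "audio_features"

async def recv_video() -> str:
    await asyncio.sleep(random.uniform(0.05, 0.5))   # slow, bursty link
    return "video_features"

def fuse_and_infer(audio: str, video: str) -> str:
    """Stand-in for the cloud-side multi-modal model."""
    return f"label({audio}, {video})"

async def blocking_inference() -> None:
    """Baseline: block until both streams arrive; latency tracks the slow one."""
    t0 = time.perf_counter()
    audio, video = await asyncio.gather(recv_audio(), recv_video())
    print(f"blocking:    {fuse_and_infer(audio, video)} "
          f"after {time.perf_counter() - t0:.3f}s")

async def speculative_inference() -> None:
    """Emit an early result using a predicted stand-in for the slow stream,
    then verify against the real features and re-run only on a miss."""
    t0 = time.perf_counter()
    video_task = asyncio.create_task(recv_video())
    audio = await recv_audio()

    # Hypothetical predictor, e.g. extrapolating from recently seen frames;
    # here it is randomly right or wrong to exercise both outcomes.
    predicted = random.choice(["video_features", "stale_video_features"])
    print(f"speculative: {fuse_and_infer(audio, predicted)} "
          f"after {time.perf_counter() - t0:.3f}s")

    video = await video_task
    if video != predicted:   # mis-speculation: pay the full latency once
        print(f"corrected:   {fuse_and_infer(audio, video)} "
              f"after {time.perf_counter() - t0:.3f}s")

async def main() -> None:
    await blocking_inference()
    await speculative_inference()

asyncio.run(main())
```

In this toy setup the speculative path answers in roughly the audio stream's latency; the trade-off is that a wrong prediction costs a correction, so the approach pays off when speculation is accurate most of the time.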