PIB: Prioritized Information Bottleneck Framework for Collaborative Edge Video Analytics

Zhengru Fang, Senkang Hu, Liyan Yang, Yiqin Deng, Xianhao Chen, Yuguang Fang

arXiv - CS - Networking and Internet Architecture, published 2024-08-30
https://doi.org/arxiv-2408.17047
Abstract
Collaborative edge sensing systems, particularly in collaborative perception
systems in autonomous driving, can significantly enhance tracking accuracy and
reduce blind spots with multi-view sensing capabilities. However, their limited
channel capacity and the redundancy in sensory data pose significant
challenges, affecting the performance of collaborative inference tasks. To
tackle these issues, we introduce a Prioritized Information Bottleneck (PIB)
framework for collaborative edge video analytics. We first propose a
priority-based inference mechanism that jointly considers the signal-to-noise
ratio (SNR) and the camera's coverage area of the region of interest (RoI). To
enable efficient inference, PIB reduces video redundancy in both spatial and
temporal domains and transmits only the essential information for the
downstream inference tasks. This eliminates the need to reconstruct videos on
the edge server while maintaining low latency. Specifically, it derives
compact, task-relevant features by employing the deterministic information
bottleneck (IB) method, which strikes a balance between feature informativeness
and communication costs. Given the computational challenges caused by IB-based
objectives with high-dimensional data, we resort to variational approximations
for feasible optimization. Compared to TOCOM-TEM, JPEG, and HEVC, PIB achieves
an improvement of up to 15.1% in mean object detection accuracy (MODA) and
reduces communication costs by 66.7% when edge cameras experience poor channel
conditions.
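The priority-based mechanism described above ranks each camera by jointly weighing its channel quality (SNR) and how much of the region of interest it covers. The paper does not give the exact scoring function in this abstract, so the sketch below is a hypothetical convex combination of a normalized SNR and an RoI-coverage fraction; the function name, the weight `alpha`, and the SNR normalization range are all illustrative assumptions, not the authors' formula.

```python
def priority_score(snr_db, roi_coverage, alpha=0.5,
                   snr_min_db=0.0, snr_max_db=30.0):
    """Hypothetical camera priority: blend of channel quality and RoI coverage.

    snr_db       -- measured channel SNR in dB (assumed range illustrative)
    roi_coverage -- fraction of the region of interest this camera sees, in [0, 1]
    alpha        -- assumed trade-off weight between the two terms
    """
    # Clamp-and-normalize the SNR into [0, 1].
    snr_norm = (snr_db - snr_min_db) / (snr_max_db - snr_min_db)
    snr_norm = min(max(snr_norm, 0.0), 1.0)
    # Higher score -> transmit this camera's features first.
    return alpha * snr_norm + (1.0 - alpha) * roi_coverage
```

Under this sketch, a camera with a strong channel but little RoI overlap can still be outranked by a well-placed camera on a weaker link, which matches the abstract's point that SNR and coverage are considered jointly.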
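The abstract notes that the IB objective is intractable for high-dimensional video features and is therefore replaced by a variational approximation. A common form of that approximation (the variational IB of Alemi et al., not necessarily the exact bound PIB derives) trades a task loss against a rate term given by the KL divergence between a Gaussian encoder and a standard-normal prior. The sketch below shows that generic trade-off; `beta` and the Gaussian-encoder assumption are illustrative.

```python
import numpy as np

def kl_gaussian_standard(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ) per sample."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def variational_ib_loss(task_loss, mu, log_var, beta):
    """Generic variational IB objective: task loss + beta * average rate.

    task_loss -- downstream inference loss (e.g. detection cross-entropy)
    mu, log_var -- parameters of the stochastic feature encoder (assumed Gaussian)
    beta -- trade-off between informativeness and communication cost
    """
    rate = kl_gaussian_standard(mu, log_var)  # bits spent on the bottleneck
    return task_loss + beta * float(np.mean(rate))
```

Larger `beta` penalizes the rate term harder, yielding more compact transmitted features at some cost in task accuracy, which is the informativeness-versus-communication-cost balance the abstract describes.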