Owl: A Pre-and Post-processing Framework for Video Analytics in Low-light Surroundings

Ruixiao Zhang, Chaoyang Li, Chen Wu, Tianchi Huang, Lifeng Sun
{"title":"Owl: A Pre-and Post-processing Framework for Video Analytics in Low-light Surroundings","authors":"Ruixiao Zhang, Chaoyang Li, Chen Wu, Tianchi Huang, Lifeng Sun","doi":"10.1109/INFOCOM53939.2023.10229059","DOIUrl":null,"url":null,"abstract":"The low-light environment is an integral surrounding in real-world video analytic applications. Conventional wisdom claims that in order to adapt to the extensive computation requirement of the analytics model and achieve high inference accuracy, the overall pipeline should leverage a client-to-cloud framework that designs a cloud-based inference with on-demand video streaming. However, we show that due to the amplified noise, directly streaming the video in low-light scenarios can introduce significant bandwidth inefficiency.In this paper, we propose Owl, an intelligent framework to optimize the bandwidth utilization and inference accuracy for the low-light video analytic pipeline. The core idea of Owl is two-fold: on the one hand, we will deploy a light-weighted pre-processing module before transmission, through which we will get the denoised video and significantly reduce the transmitted data; on the other hand, we recover the information from the denoised video via an enhancement module in the server-side. Specifically, through well-designed training mechanism and content representation technique, Owl can dynamically select the best configuration for time-varying videos. Experiments with a variety of datasets and tasks show that Owl achieves significant bandwidth benefits, while consistently optimizing the inference accuracy.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM53939.2023.10229059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Low-light environments are a common setting for real-world video analytics applications. Conventional wisdom holds that, to accommodate the heavy computation requirements of analytics models and achieve high inference accuracy, the overall pipeline should adopt a client-to-cloud framework that performs cloud-based inference over on-demand video streaming. However, we show that, due to amplified noise, directly streaming video captured in low-light scenarios introduces significant bandwidth inefficiency. In this paper, we propose Owl, an intelligent framework that optimizes bandwidth utilization and inference accuracy for the low-light video analytics pipeline. The core idea of Owl is two-fold: on the one hand, we deploy a lightweight pre-processing module before transmission, which denoises the video and significantly reduces the amount of transmitted data; on the other hand, we recover the information from the denoised video via an enhancement module on the server side. Specifically, through a well-designed training mechanism and content representation technique, Owl dynamically selects the best configuration for time-varying videos. Experiments with a variety of datasets and tasks show that Owl achieves significant bandwidth savings while consistently improving inference accuracy.
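To make the pipeline structure concrete, the sketch below mocks up an Owl-style flow in Python: the client applies lightweight denoising before streaming, the server applies an enhancement step before inference, and a simple content-aware rule picks a per-segment configuration. All module implementations here (the box-blur denoiser, gamma-correction enhancer, `CONFIGS` table, and luma-threshold selector) are illustrative placeholders inferred from the abstract, not the paper's actual algorithms.

```python
# Minimal sketch of an Owl-style pre/post-processing pipeline.
# Module names and configuration values are hypothetical placeholders.
import numpy as np

CONFIGS = [  # hypothetical per-segment denoise/enhance settings
    {"denoise_strength": 1, "gamma": 1.5},
    {"denoise_strength": 2, "gamma": 2.0},
]

def client_denoise(frame: np.ndarray, strength: int) -> np.ndarray:
    """Client-side pre-processing: a simple box blur stands in for the
    lightweight denoising module; it also makes the frame more compressible."""
    k = 2 * strength + 1
    pad = k // 2
    padded = np.pad(frame.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(frame, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def server_enhance(frame: np.ndarray, gamma: float) -> np.ndarray:
    """Server-side post-processing: gamma correction stands in for the
    enhancement module that recovers detail from the denoised low-light frame."""
    norm = frame.astype(np.float32) / 255.0
    return np.clip((norm ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)

def select_config(frame: np.ndarray) -> dict:
    """Stand-in for content-aware configuration selection:
    darker segments get stronger denoising and enhancement."""
    return CONFIGS[1] if frame.mean() < 60 else CONFIGS[0]

if __name__ == "__main__":
    # Fake low-light frame: dark pixel values plus sensor-like noise.
    rng = np.random.default_rng(0)
    frame = np.clip(rng.normal(30, 10, size=(240, 320, 3)), 0, 255).astype(np.uint8)

    cfg = select_config(frame)
    sent = client_denoise(frame, cfg["denoise_strength"])  # client, before streaming
    recovered = server_enhance(sent, cfg["gamma"])          # server, before inference
    print("mean luma: raw=%.1f sent=%.1f recovered=%.1f"
          % (frame.mean(), sent.mean(), recovered.mean()))
```

The split mirrors the design choice described in the abstract: the cheap operation that shrinks the transmitted bitstream runs on the client, while the heavier recovery step and the analytics model stay on the server.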