Proceedings of the 3rd ACM Workshop on Hot Topics in Video Analytics and Intelligent Edges: Latest Publications

The case for admission control of mobile cameras into the live video analytics pipeline
Francescomaria Faticanti, F. Bronzino, F. Pellegrini
DOI: https://doi.org/10.1145/3477083.3480151 (published 2021-10-25)
Abstract: In this paper we consider the problem of orchestrating video analytics applications over an edge computing infrastructure. Video analytics applications have traditionally been associated with the processing of video streams generated by fixed video cameras. Nowadays, however, mobile video cameras have become pervasive. We argue that taking advantage of the presence of mobile video cameras, and of their informative content, may require refactoring the edge orchestration logic. We propose a new solution that splits the problem into two connected actions: 1) placement of processing functions in the infrastructure, and 2) admission of the most informative cameras based on their field of view. We then describe a possible scheme for joint video stream admission and orchestration. Finally, preliminary numerical results demonstrate that separating the two logic components can improve coverage while reducing deployment cost.
Citations: 0
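The admission step selects the "most informative" cameras by field of view. The paper's exact scheme is not reproduced here, but the idea can be sketched as greedy maximum-coverage selection under a processing budget (function and variable names are hypothetical):

```python
# Hypothetical sketch: greedy admission of cameras by field-of-view coverage.
# Not the paper's algorithm; illustrates admitting the most informative
# cameras under a processing budget via greedy max-coverage.

def admit_cameras(fields_of_view, budget):
    """fields_of_view: dict camera_id -> set of covered ground cells.
    Returns (admitted camera ids, union of covered cells)."""
    admitted, covered = [], set()
    remaining = dict(fields_of_view)
    for _ in range(budget):
        # pick the camera that adds the most not-yet-covered cells
        best = max(remaining, key=lambda c: len(remaining[c] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break  # no camera adds new coverage
        admitted.append(best)
        covered |= remaining.pop(best)
    return admitted, covered

fovs = {"cam1": {1, 2, 3}, "cam2": {3, 4}, "cam3": {1, 2}}
admitted, covered = admit_cameras(fovs, budget=2)
```

Greedy selection is a natural baseline here because coverage is a submodular objective, for which the greedy rule carries a (1 - 1/e) approximation guarantee.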
Enabling high frame-rate UHD real-time communication with frame-skipping
Tingfeng Wang, Zili Meng, Mingwei Xu, Rui Han, Honghao Liu
DOI: https://doi.org/10.1145/3477083.3481582 (published 2021-10-25)
Abstract: With a high frame rate and high bit rate, ultra-high-definition (UHD) real-time communication (RTC) users can sometimes suffer severe service degradation. Due to fluctuations in frame arrival and decoding at the client side, a queue can build up before the streaming decoder. These fluctuations can easily overload the decoder queue and introduce noticeable delay for the queued frames. In this paper, we propose a Frame-Skipping mechanism that effectively reduces queuing delay by actively managing the frames inside the decoder queue. We jointly optimize which frames to skip so as to maintain the end-to-end delay while preserving the decoding quality of the video codec. We also mathematically quantify the potential performance with a Markov chain. We evaluate the Frame-Skipping mechanism in a trace-driven simulation with real-world UHD RTC traces. Our experiments demonstrate that Frame-Skipping can reduce the ratio of severe decoder-queue delay by up to 23x and the ratio of severe total delay by up to 2.6x.
Citations: 2
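The queue-management idea can be illustrated with a toy sketch: when the client-side backlog exceeds a threshold, skippable (e.g., non-reference) frames are dropped from the queue rather than waiting to be decoded. This is an illustration only, not the paper's jointly optimized policy:

```python
# Hypothetical sketch of active queue management before the decoder:
# frames are tuples (frame_id, skippable). When the backlog exceeds
# max_backlog, the oldest skippable frame is dropped, so reference
# frames are preserved and decode order stays valid. Illustrative only.

from collections import deque

def enqueue(queue, frame, max_backlog):
    """Append a frame; drop old skippable frames while over the backlog limit.
    Returns the list of frame ids skipped by this call."""
    queue.append(frame)
    skipped = []
    while len(queue) > max_backlog:
        victim = next((f for f in queue if f[1]), None)
        if victim is None:
            break  # nothing safe to skip; accept the extra delay
        queue.remove(victim)
        skipped.append(victim[0])
    return skipped

q = deque()
dropped = []
for i in range(6):
    # pretend odd-numbered frames are non-reference, hence skippable
    dropped += enqueue(q, (i, i % 2 == 1), max_backlog=3)
```

After the loop the queue holds only the reference frames 0, 2, 4, and the three skippable frames were dropped as the backlog limit was hit.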
Characterizing real-time dense point cloud capture and streaming on mobile devices
Jinhan Hu, Aashiq Shaikh, A. Bahremand, R. Likamwa
DOI: https://doi.org/10.1145/3477083.3480155 (published 2021-10-25)
Abstract: Point clouds are dense compilations of millions of points that can advance content creation and interaction in various emerging applications such as Augmented Reality (AR). However, point clouds consist of per-point real-world spatial and color information that is too computationally intensive to process within real-time constraints, especially on mobile devices. To stream dense point clouds (PtCl) to mobile devices, existing solutions encode pre-captured point clouds, treating PtCl capture as a separate offline operation. To discover more insights, we combine PtCl capture and streaming into a single pipeline and build a research prototype, consisting of a depth sensor with high precision and resolution, an edge-computing development board, and a smartphone, to study the bottlenecks of real-time usage on mobile devices. In a custom Unity app, we monitor the latency of each operation from capture to rendering, as well as the energy efficiency of the board and the smartphone at different point cloud resolutions. Results reveal that a toolset helping users efficiently capture, stream, and process color and depth data is the key enabler of real-time PtCl capture and streaming on mobile devices.
Citations: 5
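The measurement methodology (per-operation latency from capture to rendering) can be sketched generically; the stage names and workloads below are placeholders, not the prototype's actual operations:

```python
# Hypothetical sketch of per-stage latency accounting for a capture-to-render
# pipeline, in the spirit of the paper's measurement methodology. Stage names
# and the toy workloads are illustrative, not taken from the prototype.

import time

def run_pipeline(frame, stages):
    """stages: list of (name, fn) pairs applied in order.
    Returns (final result, {stage name: elapsed seconds})."""
    latencies = {}
    out = frame
    for name, fn in stages:
        t0 = time.perf_counter()
        out = fn(out)
        latencies[name] = time.perf_counter() - t0
    return out, latencies

stages = [
    ("capture", lambda f: f),
    ("encode", lambda f: f[::2]),   # stand-in for downsampling/encoding
    ("render", lambda f: len(f)),   # stand-in for the final render step
]
result, lat = run_pipeline(list(range(8)), stages)
```

Summing the per-stage values gives end-to-end latency, while the breakdown identifies which stage is the bottleneck at a given point cloud resolution.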
Cost effective processing of detection-driven video analytics at the edge
Md. Adnan Arefeen, M. Y. S. Uddin
DOI: https://doi.org/10.1145/3477083.3480156 (published 2021-10-25)
Abstract: We demonstrate a real-time video analytics system for applications that use object detection models on incoming frames as part of their computation pipeline. Through edge-cloud collaboration, we show how a reinforcement-learning-based agent can skip successive video frames while keeping the object detection results almost intact for end applications.
Citations: 0
Towards memory-efficient inference in edge video analytics
Arthi Padmanabhan, A. Iyer, G. Ananthanarayanan, Yuanchao Shu, Nikolaos Karianakis, G. Xu, R. Netravali
DOI: https://doi.org/10.1145/3477083.3480150 (published 2021-10-25)
Abstract: Video analytics pipelines incorporate on-premise edge servers to lower analysis latency, ensure privacy, and reduce bandwidth requirements. However, compared to the cloud, edge servers typically have lower processing power and GPU memory, limiting the number of video streams they can manage and analyze. Existing solutions for memory management, such as swapping models in and out of GPU memory, sharing a common model stem, or compressing and quantizing models to reduce their size, incur high overheads and often provide limited benefits. In this paper, we propose model merging as an approach to memory management at the edge. This proposal is based on our observation that models at the edge share common layers, and that merging these common layers across models can yield significant memory savings. Our preliminary evaluation indicates that such an approach could save up to 75% of the memory requirements. We conclude by discussing several challenges involved in realizing the model merging vision.
Citations: 5
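The memory arithmetic behind model merging can be illustrated with a toy accounting sketch. Layer names and sizes below are made up, but the sketch shows why loading common layers once, rather than once per model, shrinks resident weight memory:

```python
# Hypothetical back-of-the-envelope sketch of the model-merging idea:
# models are dicts of (layer name, size in MB). Layers identical across
# models are loaded once in the merged scheme. Sizes are illustrative.

def memory_mb(models):
    """Naive loading: every model keeps its own copy of every layer."""
    return sum(size for layers in models.values() for _, size in layers)

def merged_memory_mb(models):
    """Merged loading: layers with the same (name, size) are loaded once."""
    unique = {layer for layers in models.values() for layer in layers}
    return sum(size for _, size in unique)

models = {
    "detector": [("backbone", 90), ("det_head", 10)],
    "classifier": [("backbone", 90), ("cls_head", 5)],
}
naive, merged = memory_mb(models), merged_memory_mb(models)
```

In this toy example the shared 90 MB backbone cuts memory from 195 MB to 105 MB; the larger the shared stem relative to the per-task heads, the closer the savings get to the up-to-75% figure reported in the paper's preliminary evaluation.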
Decentralized modular architecture for live video analytics at the edge
Sri Pramodh Rachuri, F. Bronzino, Shubham Jain
DOI: https://doi.org/10.1145/3477083.3480153 (published 2021-10-25)
Abstract: Live video analytics has become a key technology for supporting surveillance, security, traffic control, and even consumer multimedia applications in real time. The continuous growth in the number of networked video cameras will further increase its widespread adoption. Yet, until now, developments in video analytics have largely focused on fixed cameras, omitting the ever-growing presence of mobile cameras such as car dash-cams, drones, and smartphones. Edge computing, coupled with centralized clouds, has helped alleviate network traffic and processing load, reducing latency and data transmissions. However, the current approach of processing video feeds through a hierarchy of clusters across a somewhat predictable path in the network will not be sufficient to support the integration of mobile feeds into the video analytics architecture. In this paper, we argue that a crucial step towards supporting heterogeneous camera sources is the adoption of a flat edge computing architecture. Such an architecture should enable the dynamic distribution of processing loads across distributed computing points of presence, rapidly adapting to sudden changes in traffic conditions. In support of this hypothesis, we present exploratory results showing that smartly distributing and processing vision modules in parallel across available edge compute nodes can ultimately lead to better resource utilization and improved performance.
Citations: 2
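A flat, decentralized architecture needs some policy for spreading vision modules across edge nodes. As a minimal stand-in (not the paper's mechanism), one can greedily place each module on the currently least-loaded node:

```python
# Hypothetical sketch of spreading vision modules across edge compute nodes:
# greedily place each module on the least-loaded node, heaviest modules
# first. An illustration of the flat-edge idea, not the paper's scheduler.

def place_modules(module_costs, nodes):
    """module_costs: dict module -> processing cost.
    Returns (module -> node assignment, node -> total load)."""
    loads = {n: 0.0 for n in nodes}
    assignment = {}
    # placing heavier modules first gives the greedy rule a better balance
    for module, cost in sorted(module_costs.items(), key=lambda kv: -kv[1]):
        node = min(loads, key=loads.get)   # least-loaded node so far
        assignment[module] = node
        loads[node] += cost
    return assignment, loads

mods = {"detect": 4.0, "track": 2.0, "reid": 3.0, "count": 1.0}
assignment, loads = place_modules(mods, ["edge1", "edge2"])
```

This is the classic longest-processing-time greedy heuristic for load balancing; here it splits the four modules into two equally loaded nodes.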
Auto-SDA: Automated video-based social distancing analyzer
Mahshid Ghasemi, Z. Kostić, Javad Ghaderi, G. Zussman
DOI: https://doi.org/10.1145/3477083.3480154 (published 2021-10-25)
Abstract: Social distancing can reduce infection rates in respiratory pandemics such as COVID-19, especially in dense urban areas. To assess pedestrians' compliance with social distancing policies, we use the pilot site of the PAWR COSMOS wireless edge-cloud testbed in New York City to design and evaluate an Automated video-based Social Distancing Analyzer (Auto-SDA) pipeline. Auto-SDA derives pedestrians' trajectories and measures the duration of close-proximity events. It relies on an object detector and a tracker; however, to achieve highly accurate social distancing analysis, we design and incorporate three modules into Auto-SDA: (i) a calibration module that converts 2D pixel distances to 3D on-ground distances with less than 10 cm error, (ii) a correction module that identifies pedestrians who were missed or assigned duplicate IDs by the detector-tracker and rectifies their IDs, and (iii) a group detection module that identifies affiliated pedestrians (i.e., pedestrians who walk together as a social group) and excludes them from the social distancing violation analysis. We applied Auto-SDA to videos recorded at the COSMOS pilot site before the pandemic, soon after the lockdown, and after vaccines became broadly available, and analyzed the impact of the social distancing protocols on pedestrians' behavior and its evolution. For example, the analysis shows that after the lockdown, less than 55% of pedestrians violated the social distancing protocols, whereas this percentage increased to 65% after vaccines became available. Moreover, after the lockdown, 0-20% of pedestrians were affiliated with a social group, compared to 10-45% once vaccines became available. Finally, following the lockdown, pedestrian density at the intersection decreased by almost 50%.
Citations: 14
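Of the three modules, the calibration step is the most self-contained to sketch: a planar homography maps image pixels to ground-plane coordinates, after which on-ground distances are plain Euclidean distances. The matrix below is illustrative; in practice it would be estimated from surveyed reference points at the intersection:

```python
# Hypothetical sketch of pixel-to-ground calibration via a planar homography.
# H is a 3x3 matrix (nested lists); the example H is a toy scale-only mapping
# (100 px per meter, no perspective), not a fitted camera calibration.

import math

def project(H, px, py):
    """Apply homography H to pixel (px, py), returning ground-plane (x, y)."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w   # homogeneous normalization

def ground_distance(H, p1, p2):
    """On-ground Euclidean distance, in the homography's units (e.g., meters)."""
    x1, y1 = project(H, *p1)
    x2, y2 = project(H, *p2)
    return math.hypot(x2 - x1, y2 - y1)

H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
d = ground_distance(H, (0, 0), (300, 400))
```

Two detections 300 px apart horizontally and 400 px vertically map to a 5 m on-ground separation under this toy calibration; a violation check would then compare such distances against the distancing threshold.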