A Feature-based Video Transmission Framework for Visual IoT in Fog Computing Systems

Yuqin Wang, Jingce Xu, Wen Ji
Published in: 2019 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS)
Publication date: September 2019
DOI: 10.1109/ANCS.2019.8901872
Citation count: 1

Abstract

The rapid development of the Internet of Things (IoT) promotes research in smart cities and Fog computing. The vast volume of real-time visual data produced by the enormous number of end devices in the IoT is a major challenge for the network to transmit and for the data center to store. A typical case is the huge volume of visual data produced by surveillance cameras in a smart city. In this paper, we consider the problem of how to allocate the computing capacity of a Fog node to handle surveillance data so as to obtain low delay while maintaining video quality. To address this challenge, we reduce the massive video data using deep learning models in the computational Fog node and optimize the transmission function for high efficiency. To reduce the data, we extract video features and keep salient zones at high resolution, while confining the unavoidable distortion to less important areas. To obtain the least transmission delay under the dynamic bandwidth of Fog computing, we model the transmission delay function and solve it by Lagrangian dual decomposition. We evaluate our method with experiments on the public Cityscapes dataset and a 4G/LTE bandwidth log. The experimental results show that our feature-based image processing method obtains around 68.7% higher average SSIM (structural similarity index) than traditional HEVC in the salient zones, and our solution reduces the system delay by 71.02% compared with plain transmission. This demonstrates that our solution reduces video transmission latency while preserving the SSIM of salient areas in the video.
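The abstract's delay-minimization step can be illustrated with a minimal sketch. The model below is hypothetical: it assumes each stream i carries S_i bits over rate r_i (delay S_i/r_i) and the streams share a total bandwidth budget B, and it only demonstrates the Lagrangian dual decomposition technique the paper names, not the paper's actual delay function. The per-stream subproblem decouples in closed form, and the dual variable is driven by a subgradient update on the coupling constraint.

```python
import numpy as np

# Hypothetical, simplified delay model: stream i must push S_i bits of
# salient-zone video; the streams share total uplink bandwidth B.
# Minimize sum_i S_i / r_i  subject to  sum_i r_i <= B.
S = np.array([8.0, 2.0, 4.0])   # bits per stream (assumed values)
B = 10.0                        # total available bandwidth (assumed)

lam = 1.0                       # dual variable: price per unit of bandwidth
step = 0.05                     # subgradient step size
for _ in range(2000):
    # Decoupled per-stream subproblem:
    #   min_r  S_i/r + lam*r   has closed form  r_i = sqrt(S_i / lam)
    r = np.sqrt(S / lam)
    # Dual subgradient ascent on the coupling constraint sum(r) <= B
    lam = max(lam + step * (r.sum() - B), 1e-9)

# At the optimum the rates saturate B and split in proportion to sqrt(S_i),
# so heavier streams get more bandwidth, but with diminishing returns.
```

Under this toy model, the converged allocation achieves a strictly lower total delay than splitting B uniformly across the streams, which is the kind of gain the paper's optimization targets.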