{"title":"A Light-Weight Compressed Video Processing Method on Embedded Platforms for IIoT","authors":"Lvcheng Chen, Pingyang Liu, Li Zhang","doi":"10.1109/ICITES53477.2021.9637075","DOIUrl":null,"url":null,"abstract":"Recently, video has become an important medium for knowledge sharing for both industrial and consumer scenarios. For industrial applications, especially Industrial IoT (IIoT), it is highly desired to transfer the video content with limited bandwidth and process the video using constrained resources, which makes compressed video processing a very challenging problem. Recently, there have been extensive works focusing on compressed video quality enhancement (VQE) tasks, many of which deploy dedicated and complex CNNs to reach amazing performances. Such advancements have enabled various applications in video-based tasks. On the other hand, since deep neural networks often require high computational resources, such complex CNNs can hardly be deployed on the embedded devices. Thus, model pruning technique and inference optimization have been appealing options for efficient deployment of VQE under resource-constrained environments. In this paper, we incorporate a novel deformable convolution method into our network architecture and propose a light-weight method for compressed video quality enhancement on an embedded platform for IIoT. The proposed system has outperformed several SOTA light-weight quality enhancement models and can achieve 15.230 FPS and 0.773 FPS/W on MFQEv2 dataset [1].","PeriodicalId":370828,"journal":{"name":"2021 International Conference on Intelligent Technology and Embedded Systems (ICITES)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Intelligent Technology and Embedded Systems (ICITES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICITES53477.2021.9637075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recently, video has become an important medium for knowledge sharing in both industrial and consumer scenarios. For industrial applications, especially the Industrial IoT (IIoT), it is highly desirable to transfer video content over limited bandwidth and process it with constrained resources, which makes compressed video processing a challenging problem. There has been extensive work on compressed video quality enhancement (VQE), much of it deploying dedicated, complex CNNs to achieve impressive performance. Such advances have enabled a variety of video-based applications. On the other hand, since deep neural networks often demand substantial computation, these complex CNNs can hardly be deployed on embedded devices. Model pruning and inference optimization have therefore become appealing options for efficient VQE deployment in resource-constrained environments. In this paper, we incorporate a novel deformable convolution method into our network architecture and propose a light-weight method for compressed video quality enhancement on an embedded platform for IIoT. The proposed system outperforms several state-of-the-art light-weight quality enhancement models and achieves 15.230 FPS and 0.773 FPS/W on the MFQEv2 dataset [1].
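To make the deformable-convolution idea concrete, below is a minimal sketch of a light-weight enhancement block built around torchvision's `DeformConv2d`. It is an illustrative assumption, not the authors' actual architecture: the block name, channel widths, two-frame fusion, and offset-prediction layer are hypothetical, and only show how predicted offsets can align a neighboring frame's features before residual fusion.

```python
# A minimal sketch, assuming a PyTorch/torchvision environment.
# All layer names and hyper-parameters here are illustrative, not the paper's design.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class LightweightEnhanceBlock(nn.Module):
    """Aligns a neighboring frame's features to the target frame with a
    deformable convolution, then fuses both to predict an enhancement residual."""

    def __init__(self, channels: int = 16, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets are predicted from the concatenated target/neighbor features:
        # 2 coordinates (x, y) per kernel sampling location.
        self.offset_pred = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, 3, padding=1)
        self.deform = DeformConv2d(
            channels, channels, kernel_size, padding=pad)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, target_feat, neighbor_feat):
        offsets = self.offset_pred(torch.cat([target_feat, neighbor_feat], dim=1))
        aligned = self.deform(neighbor_feat, offsets)
        # Residual fusion keeps the block light-weight: only the correction
        # on top of the compressed target features needs to be learned.
        return target_feat + self.fuse(torch.cat([target_feat, aligned], dim=1))


if __name__ == "__main__":
    block = LightweightEnhanceBlock(channels=16)
    t = torch.randn(1, 16, 64, 64)   # features of the frame to enhance
    n = torch.randn(1, 16, 64, 64)   # features of a neighboring reference frame
    print(block(t, n).shape)         # torch.Size([1, 16, 64, 64])
```

Keeping the channel count small and predicting offsets with a single convolution is one way such a block can stay within embedded-platform budgets; the paper's reported 15.230 FPS and 0.773 FPS/W come from its own pruned and inference-optimized model, not from this sketch.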