Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices
S. Mochizuki, K. Imamura, K. Mori, Y. Matsuda, T. Matsumura
2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), November 2018
DOI: 10.1109/IOTAIS.2018.8600851
Citations: 2
Abstract
Applications such as autonomous driving and virtual reality (VR) require low-latency transfer of high-definition (HD) video. The proposed ultra-low-latency video coding method, which adopts line-based processing, achieves a minimum latency of 0.44 μs for Full-HD video. With multiple line-based image-prediction methods, image-adaptive quantization, and optimized entropy coding, the proposed method compresses video to 39.0% of the original data size at an image quality of 45.4 dB. The proposed basic algorithm and the optional 1D-DCT mode achieve compression to 33% and 20%, respectively, without significant visual degradation. These results are comparable to those of H.264 Intra coding, even though the latency of the proposed method is roughly one-thousandth as long. With the proposed video coding, autonomous vehicles and VR devices can transfer HD video using 20% of the bandwidth of the source video without significant latency or visual degradation.
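As a rough illustration only, and not the paper's actual algorithm, the sketch below shows why line-based processing bounds latency to a fraction of a frame: each pixel is predicted from already-reconstructed neighbors on the current and previous line, and only the quantized residual needs to be transmitted, so coding can start as soon as a single line of video arrives. The average predictor, the fixed quantization step, and the omission of entropy coding are all simplifying assumptions standing in for the paper's multiple prediction modes, image-adaptive quantizer, and optimized entropy coder.

```python
import numpy as np

def encode_line(cur_line, prev_recon_line, qstep=4):
    """Predict each pixel from its reconstructed left and above neighbors,
    then quantize the prediction residual (hypothetical simplified scheme)."""
    residuals = np.empty(len(cur_line), dtype=np.int32)
    recon = np.empty(len(cur_line), dtype=np.int32)
    for x in range(len(cur_line)):
        left = recon[x - 1] if x > 0 else 128                 # reconstructed left neighbor
        above = prev_recon_line[x] if prev_recon_line is not None else 128
        pred = (int(left) + int(above) + 1) // 2              # simple average predictor
        r = int(cur_line[x]) - pred
        q = int(np.round(r / qstep))                          # uniform quantization as a
                                                              # stand-in for adaptive quantization
        residuals[x] = q
        recon[x] = np.clip(pred + q * qstep, 0, 255)          # decoder-side reconstruction
    return residuals, recon

def encode_frame(frame, qstep=4):
    """Code a frame line by line; only the previous reconstructed line is kept,
    which is what keeps the coding delay to a few lines of video."""
    prev_recon = None
    coded = []
    for y in range(frame.shape[0]):
        residuals, prev_recon = encode_line(frame[y], prev_recon, qstep)
        coded.append(residuals)                               # would be entropy coded in practice
    return np.stack(coded)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(8, 16), dtype=np.uint8)
    print(encode_frame(frame).shape)
```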