Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices

S. Mochizuki, K. Imamura, K. Mori, Y. Matsuda, T. Matsumura
{"title":"Ultra-low-latency Video Coding Method for Autonomous Vehicles and Virtual Reality Devices","authors":"S. Mochizuki, K. Imamura, K. Mori, Y. Matsuda, T. Matsumura","doi":"10.1109/IOTAIS.2018.8600851","DOIUrl":null,"url":null,"abstract":"Applications such as autonomous driving and virtual reality (VR) require low-latency transfer of high definition (HD) video. The proposed ultra-low-latency video coding method, which adopts line-based processing, has 0.44μs latency at minimum for Full-HD video. With multiple line-based image-prediction methods, image-adaptive quantization, and optimized entropy coding, the proposed method achieves compression to 39.0% data size and image quality of 45.4dB. The proposed basic algorithm and the optional 1D-DCT mode achieve compression to 33% and 20%, respectively, without significant visual degradation. These results are comparable to those for H.264 Intra despite one-thousandth ultra-low-latency of the proposed method. With the proposed video coding, the autonomous vehicles and VR devices can transfer HD video using 20% of the bandwidth of the source video without significant latency or visual degradation.","PeriodicalId":302621,"journal":{"name":"2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IOTAIS.2018.8600851","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Applications such as autonomous driving and virtual reality (VR) require low-latency transfer of high-definition (HD) video. The proposed ultra-low-latency video coding method, which adopts line-based processing, achieves a minimum latency of 0.44 μs for Full-HD video. Using multiple line-based image-prediction methods, image-adaptive quantization, and optimized entropy coding, the method compresses the data to 39.0% of its original size at an image quality of 45.4 dB. The proposed basic algorithm and the optional 1D-DCT mode achieve compression to 33% and 20%, respectively, without significant visual degradation. These results are comparable to those of H.264 Intra, even though the proposed method's latency is roughly one thousandth as long. With the proposed video coding, autonomous vehicles and VR devices can transfer HD video using 20% of the source video's bandwidth without significant latency or visual degradation.
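
To make the line-based pipeline described above concrete, the following is a minimal sketch of an encoder of the kind the abstract outlines: each pixel of a scan line is predicted from its left neighbor, the residual is uniformly quantized, and an optional 1-D DCT mode transforms short segments of the line instead. The predictor, quantization step, block size, and names (dct_1d, encode_line) are illustrative assumptions; the paper's actual prediction modes, image-adaptive quantizer, and entropy coder are not specified in the abstract.

```python
import numpy as np

def dct_1d(block):
    """Orthonormal DCT-II of a 1-D block (illustrative, not optimized)."""
    n = len(block)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.sqrt(np.where(k == 0, 1.0 / n, 2.0 / n))
    return scale * (basis @ block)

def encode_line(line, q_step=8, dct_block=8, use_dct=False):
    """Toy line-based encoder: left-neighbor (DPCM-style) prediction of one
    scan line, uniform quantization, and an optional 1-D DCT mode.  Returns
    the quantized symbols that an entropy coder would then compress."""
    line = np.asarray(line, dtype=np.float64)
    if use_dct:
        # Hypothetical optional mode: 1-D DCT over fixed-size segments of the line.
        pad = (-len(line)) % dct_block
        padded = np.pad(line, (0, pad), mode="edge")
        coeffs = np.concatenate([dct_1d(padded[i:i + dct_block])
                                 for i in range(0, len(padded), dct_block)])
        return np.round(coeffs / q_step).astype(np.int32)
    # Basic mode: predict each pixel from its left neighbor (128 for the first pixel).
    # A real codec would predict from *reconstructed* pixels to avoid encoder/decoder drift.
    pred = np.concatenate(([128.0], line[:-1]))
    return np.round((line - pred) / q_step).astype(np.int32)

# One Full-HD luma line (1920 pixels) of synthetic content.
line = np.random.default_rng(0).integers(0, 256, size=1920)
print(encode_line(line)[:8])                # basic prediction mode
print(encode_line(line, use_dct=True)[:8])  # optional 1-D DCT mode
```

For scale, a Full-HD 1080p60 stream at 8-bit 4:2:2 (an assumed source format, not stated in the abstract) is roughly 2 Gbit/s uncompressed, so compression to 20% would correspond to roughly 0.4 Gbit/s on the link.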