Understanding and Improving Perceptual Quality of Volumetric Video Streaming
Mengyu Yang, Di Wu, Zelong Wang, Miao Hu, Yipeng Zhou
2023 IEEE International Conference on Multimedia and Expo (ICME), July 2023. DOI: 10.1109/ICME55011.2023.00339
Volumetric video is fully three-dimensional and provides users with a highly immersive and interactive experience. However, it is difficult to stream volumetric video over the Internet due to its sheer size and limited network bandwidth, and existing solutions suffer from poor perceptual quality and low coding efficiency. In this paper, we first conduct a comprehensive user study to understand the effectiveness of popular perceptual quality metrics for volumetric video, and observe that these metrics cannot well capture the impact of user viewing behaviors. Based on the finding that users are more sensitive to distortion in the 2D images rendered from the 3D point cloud, we propose a new metric, called Volu-FMAF, to better represent the perceptual quality of volumetric video. Next, we propose a novel neural-based volumetric video streaming framework, RenderVolu, and design a distortion-aware rendered-image super-resolution network, called RenDA-Net, to further improve user perceptual quality. Last, we conduct extensive experiments on real datasets to validate the proposed method; the results show that it boosts the perceptual quality of volumetric video by 171% to 190% and achieves a 108x speedup in decoding efficiency compared to state-of-the-art approaches.
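The abstract describes a pipeline in which the 3D point cloud is rendered to a 2D image and a distortion-aware super-resolution network (RenDA-Net) then restores quality at the user's viewport. The sketch below is a rough illustration of that general pipeline only, not the paper's implementation: it projects a colored point cloud into a low-resolution image with an assumed pinhole camera and upsamples the render with plain bicubic interpolation as a stand-in for a learned SR model. All function names, camera intrinsics, and image sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): render a colored point cloud
# to a low-resolution 2D image via a pinhole camera, then upsample the render.
# Bicubic interpolation stands in for a trained super-resolution network.
import numpy as np
import torch
import torch.nn.functional as F

def render_point_cloud(points, colors, K, image_size=(270, 480)):
    """Project Nx3 points (camera coordinates, z > 0) with Nx3 RGB colors
    into an HxWx3 image using pinhole intrinsics K; nearest point wins."""
    h, w = image_size
    image = np.zeros((h, w, 3), dtype=np.float32)
    depth = np.full((h, w), np.inf, dtype=np.float32)

    # Perspective projection: u = fx * x / z + cx, v = fy * y / z + cy
    z = points[:, 2]
    u = (K[0, 0] * points[:, 0] / z + K[0, 2]).astype(int)
    v = (K[1, 1] * points[:, 1] / z + K[1, 2]).astype(int)

    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    for ui, vi, zi, ci in zip(u[valid], v[valid], z[valid], colors[valid]):
        if zi < depth[vi, ui]:          # simple z-buffer: keep the closest point
            depth[vi, ui] = zi
            image[vi, ui] = ci
    return image

def upsample(image, scale=4):
    """Bicubic upsampling of the low-resolution render; a trained
    distortion-aware SR network would replace this step."""
    x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)  # 1x3xHxW
    y = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
    return y.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform([-1, -1, 2], [1, 1, 4], size=(50_000, 3))   # toy point cloud
    cols = rng.uniform(0, 1, size=(50_000, 3))                    # toy RGB colors
    K = np.array([[300.0, 0.0, 240.0], [0.0, 300.0, 135.0], [0.0, 0.0, 1.0]])
    lr = render_point_cloud(pts, cols, K)   # (270, 480, 3) low-resolution render
    hr = upsample(lr)                       # (1080, 1920, 3) upscaled frame
    print(lr.shape, hr.shape)
```

In the framework the paper proposes, the naive upsampling step would be replaced by the trained distortion-aware network; the sketch only fixes the input and output shapes involved in rendering and super-resolving a viewport image.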