{"title":"HVASR:利用视口感知超级分辨率加强 360 度视频传输","authors":"","doi":"10.1016/j.ins.2024.121609","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, 360-degree videos have gained significant traction due to their capacity to provide immersive experiences. However, the adoption of 360-degree videos substantially escalates bandwidth demands, necessitating approximately four to ten times more bandwidth than traditional video formats do. This presents a considerable challenge in maintaining high-quality videos in environments characterized by limited bandwidth or unstable networks. A trend has emerged where client-side computational power and deep neural networks are employed to enhance video quality while mitigating bandwidth requirements within contemporary video delivery systems. These approaches segment a video into discrete chunks and apply super resolution (SR) models to each segment, streaming low-resolution (LR) chunks alongside their corresponding SR models to the client. Although these methods enhance both video quality and transmission efficiency for conventional videos, they impose greater computational resource demands when applied to 360-degree content, thereby constraining widespread implementation. This paper introduces an innovative method called HVASR for 360-degree videos that leverages viewport information for more precise segmentation and minimizes model training costs as well as bandwidth requirements. Additionally, HVASR incorporates a viewport-aware training strategy that is aimed at further enhancing performance while reducing computational expenses. The experimental results demonstrate that HVASR achieves an average utility increase ranging from 12.46% to 40.89% across various scenes.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":null,"pages":null},"PeriodicalIF":8.1000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HVASR: Enhancing 360-degree video delivery with viewport-aware super resolution\",\"authors\":\"\",\"doi\":\"10.1016/j.ins.2024.121609\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In recent years, 360-degree videos have gained significant traction due to their capacity to provide immersive experiences. However, the adoption of 360-degree videos substantially escalates bandwidth demands, necessitating approximately four to ten times more bandwidth than traditional video formats do. This presents a considerable challenge in maintaining high-quality videos in environments characterized by limited bandwidth or unstable networks. A trend has emerged where client-side computational power and deep neural networks are employed to enhance video quality while mitigating bandwidth requirements within contemporary video delivery systems. These approaches segment a video into discrete chunks and apply super resolution (SR) models to each segment, streaming low-resolution (LR) chunks alongside their corresponding SR models to the client. Although these methods enhance both video quality and transmission efficiency for conventional videos, they impose greater computational resource demands when applied to 360-degree content, thereby constraining widespread implementation. This paper introduces an innovative method called HVASR for 360-degree videos that leverages viewport information for more precise segmentation and minimizes model training costs as well as bandwidth requirements. 
Additionally, HVASR incorporates a viewport-aware training strategy that is aimed at further enhancing performance while reducing computational expenses. The experimental results demonstrate that HVASR achieves an average utility increase ranging from 12.46% to 40.89% across various scenes.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0020025524015238\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020025524015238","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
HVASR: Enhancing 360-degree video delivery with viewport-aware super resolution
In recent years, 360-degree videos have gained significant traction due to their capacity to provide immersive experiences. However, the adoption of 360-degree video substantially escalates bandwidth demands, requiring approximately four to ten times more bandwidth than traditional video formats. This presents a considerable challenge for maintaining high-quality video in environments with limited bandwidth or unstable networks. A trend has emerged in contemporary video delivery systems where client-side computational power and deep neural networks are employed to enhance video quality while mitigating bandwidth requirements. These approaches segment a video into discrete chunks, apply super-resolution (SR) models to each chunk, and stream the low-resolution (LR) chunks alongside their corresponding SR models to the client. Although these methods enhance both video quality and transmission efficiency for conventional videos, they impose greater computational resource demands when applied to 360-degree content, thereby constraining widespread implementation. This paper introduces an innovative method called HVASR for 360-degree videos that leverages viewport information for more precise segmentation and minimizes model training costs as well as bandwidth requirements. Additionally, HVASR incorporates a viewport-aware training strategy aimed at further enhancing performance while reducing computational expenses. The experimental results demonstrate that HVASR achieves an average utility increase ranging from 12.46% to 40.89% across various scenes.
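As a rough illustration of the delivery scheme described in the abstract, the minimal Python sketch below shows a client that receives the LR tiles for one chunk together with a small per-chunk SR model and spends SR compute only on tiles predicted to fall inside the viewport. Everything here (the tile layout, the `sr_model` callable, the viewport mask, and the bilinear fallback) is a hypothetical placeholder under stated assumptions, not the authors' implementation.

```python
# Illustrative sketch only: per-chunk, viewport-aware super-resolution on the
# client side. The real HVASR pipeline, model, and tiling scheme may differ.

import numpy as np


def bilinear_upscale(tile: np.ndarray, scale: int) -> np.ndarray:
    """Cheap fallback upscaling for tiles outside the predicted viewport."""
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)


def reconstruct_chunk(lr_tiles, viewport_mask, sr_model, scale=4):
    """Upscale one video chunk tile by tile.

    lr_tiles      : dict mapping (row, col) -> LR tile as an (H, W, 3) array
    viewport_mask : set of (row, col) indices predicted to be viewed
    sr_model      : callable tile -> upscaled tile (stand-in for the small
                    per-chunk SR model streamed alongside the LR chunk)
    """
    hr_tiles = {}
    for idx, tile in lr_tiles.items():
        if idx in viewport_mask:
            hr_tiles[idx] = sr_model(tile)                 # expensive, high quality
        else:
            hr_tiles[idx] = bilinear_upscale(tile, scale)  # cheap fallback
    return hr_tiles


if __name__ == "__main__":
    # Dummy data: a 2x2 tile grid with the viewport covering a single tile.
    rng = np.random.default_rng(0)
    lr = {(r, c): rng.random((64, 64, 3)) for r in range(2) for c in range(2)}
    dummy_sr = lambda t: bilinear_upscale(t, 4)            # placeholder SR model
    out = reconstruct_chunk(lr, viewport_mask={(0, 0)}, sr_model=dummy_sr)
    print({k: v.shape for k, v in out.items()})
```

The design point mirrored here is the one the abstract emphasizes: viewport information decides where the expensive per-chunk SR model is applied, so out-of-viewport tiles cost only a cheap interpolation.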
Journal introduction:
Information Sciences (Informatics and Computer Science, Intelligent Systems, Applications) is an international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.