{"title":"QoE and Immersive Media: A New Challenge","authors":"F. Battisti","doi":"10.1145/3423328.3423503","DOIUrl":"https://doi.org/10.1145/3423328.3423503","url":null,"abstract":"In the past 20 years the \"Quality of Experience\" (QoE) has increasingly become a necessary aspect to be taken into account by the multimedia industry. In fact, the QoE has allowed to extend the concept of image quality by considering other aspects to evaluate \"the delight or annoyance of a customer's experiences with a service\" [1]. The concept of QoE is extremely important since it has a big impact on several fields, ranging from the acquisition to the rendering of multimedia contents. Many works can be found in literature that address the problem of evaluating the QoE for 2D and 3D contents but nowadays we are facing a new challenging task: to study the QoE for immersive media. When talking about immersive media we are referring to different types of multimedia that allow the users to explore the content in their personal way thus introducing a new and important variable to be considered while evaluating the QoE. In particular, recent years have witnessed the spread of Virtual and Augmented Reality that allow the users to be involved in a more realistic and deep way in the artificial multimedia content or to interact with virtual objects [2]. In this new scenario we are moving to the concept of \"personal\" QoE that encompasses many open questions that are still unanswered such as: which is the impact of the rendering system on the QoE, how important are the viewing conditions, how can we account for the immersive media content way of exploration, can the study of multimedia saliency help in understanding the QoE [3]? In this talk we will address these open questions to explore what are the current research findings and trends and I will give you an insight of what will come next.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121907739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Better Quality Assessment of High-Quality Videos","authors":"Suiyi Ling, Yoann Baveye, Deepthi Nandakumar, S. Sethuraman, P. Le Callet","doi":"10.1145/3423328.3423496","DOIUrl":"https://doi.org/10.1145/3423328.3423496","url":null,"abstract":"In recent times, video content encoded at High-Definition (HD) and Ultra-High-Definition (UHD) resolution dominates internet traffic. The significantly increased data rate and growing expectations of video quality from users create great challenges in video compression and quality assessment, especially for higher-resolution, higher-quality content. The development of robust video quality assessment metrics relies on the collection of subjective ground truths. As high-quality video content is more ambiguous and difficult for a human observer to rate, a more distinguishable subjective protocol/methodology should be considered. In this study, towards better quality assessment of high-quality videos, a subjective study was conducted focusing on high-quality HD and UHD content with the Degradation Category Rating (DCR) protocol. Commonly used video quality metrics were benchmarked in two quality ranges.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114887777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do we Really Need No-reference Video Quality Metrics?","authors":"I. Katsavounidis","doi":"10.1145/3423328.3423502","DOIUrl":"https://doi.org/10.1145/3423328.3423502","url":null,"abstract":"Objective video quality metrics are an essential part of modern video processing pipelines, guiding video encoding decisions and encoding recipes, helping adaptive bitrate streaming algorithms make smart decisions and providing system-level monitoring capabilities. We will offer a breakdown of an end-to-end such pipeline, highlighting which types of video quality metrics are deployed in each system component and then focus on the single aspect that makes social videos so much different - and one can argue more difficult - to process: their wildly varying and typically inferior source quality. We will then discuss how no-reference video quality metrics have been typically used to measure user-generated video content quality with limited success and make a case for how the video industry can unite and solve this problem at its root.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132456226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Measurements on a Cloud VR Gaming Platform","authors":"Yen-Chun Li, Chia-Hsin Hsu, Yu-Chun Lin, Cheng-Hsin Hsu","doi":"10.1145/3423328.3423497","DOIUrl":"https://doi.org/10.1145/3423328.3423497","url":null,"abstract":"As cloud gaming and Virtual Reality (VR) games become popular in the game industry, game developers engage in these fields to boost their sales. Because cloud gaming possesses the merit of lifting computation loads from client devices to servers, it solves the high resource consumption issue of VR games on regular clients. However, it is important to know where is the bottleneck of the cloud VR gaming platform and how can it be improved in the future. In this paper, we conduct extensive experiments on the state-of-the-art cloud VR gaming platform--Air Light VR (ALVR). In particular, we analyze the performance of ALVR using both Quality-of-Service and Quality-of-Experience metrics. Our experiments reveal that latency (up to 90 ms RTT) has less influence on user experience compared to bandwidth limitation (as small as 35 Mbps) and packet loss rate (as high as 8%) . Moreover, we find that VR gamers can hardly notice the difference between the gaming experience with different latency values (between 0 and 90 ms RTT). Such findings shed some lights on how to further improve the cloud VR gaming platform, e.g., a budget of up to 90 ms RTT may be used to absorb network dynamics when bandwidth is insufficient.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128354876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Subjective Study of Multi-Dimensional Aesthetic Assessment for Mobile Game Image","authors":"Suiyi Ling, Junle Wang, Wenming Huang, Yundi Guo, Like Zhang, Yanqing Jing, P. Le Callet","doi":"10.1145/3423328.3423501","DOIUrl":"https://doi.org/10.1145/3423328.3423501","url":null,"abstract":"Nowadays, mobile gaming has become one of the most rapidly developing fields boosted by fast-evolving techniques, which also gradually becomes one of the biggest parts of modern digital entertainment. With the exponential growth of users, published mobile games, and higher expectations for gaming experiences, multi-dimensional aesthetic assessment is essential in providing guidance for graphic/game developers, quality control of the overall gaming system, and achieving a better trade-off between gaming image quality and the rendering complexity (limited by the device performance). So far, most of the relative researches have been limited to only one dimension evaluation e.g., quality assessment considering streaming artifacts, which neglects other important aesthetic-related perspectives. In this paper, a comprehensive subjective study is presented considering multi-dimensional aesthetic factors (i.e., the fineness, color harmony, colorfulness, and overall quality) of mobile gaming images. Throughout extensive conducted experiments on the collected large-scale dataset, we discuss the relationships between different dimensions, and benchmark different image metrics designed for various purposes.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126480147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the Efficiency of QoE Crowdtesting","authors":"Ricky K. P. Mok, Ginga Kawaguti, J. Okamoto","doi":"10.1145/3423328.3423499","DOIUrl":"https://doi.org/10.1145/3423328.3423499","url":null,"abstract":"Crowdsourced testing is an increasingly popular way to study the quality of experience (QoE) of applications, such as video streaming and web. The diverse nature of the crowd provides a more realistic assessment environment than laboratory-based assessments allow. Because of the short life-span of crowdsourcing tasks, each subject spends a significant fraction of the experiment time just learning how it works. We propose a novel experiment design to conduct a longitudinal crowdsourcing study aimed at improving the efficiency of crowdsourced QoE assessments. On Amazon Mechanical Turk, we found that our design was 20% more cost-effective than crowdsourcing multiple one-off short experiments. Our results showed that subjects had a high level of revisit intent and continuously participated in our experiments. We replicated the video streaming QoE assessments in a traditional laboratory setting. Our study showed similar trends in the relationship between video bitrate and QoE, which confirm findings in prior research.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128547700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VMP360","authors":"Yuxuan Pan, Xikang Jiang, Wei Quan, Lin Zhang","doi":"10.1145/3423328.3423500","DOIUrl":"https://doi.org/10.1145/3423328.3423500","url":null,"abstract":"Containing full panoramic content in a single frame and providing immersive experience for users, 360° video has attracted great attention in industry and academia. Viewport-driven tiling schemes have been introduced in 360° video processing to provide high-quality video streaming. However, treating viewport as traditional streaming screen results in frequently rebuffer or quality distortion, leading to poor Quality of Experience (QoE) of schemes. In this paper, we propose Viewpoint Movement Perception 360° Video Streaming (VMP360), an adaptive 360° video streaming system that utilizes unique factors of 360° video perception quality of users to improve the overall QoE. By studying the relative moving speed and depth difference between the viewpoint and other content, the system evaluates the perceived quality distortion based on optical flow estimation. Taking QoE into account, a novel 360° video quality evaluation metric is defined as Optical-flow-based Peak Signal-to-Noise Ratio (OPSNR). Appling OPSNR to tiling process, VMP360 proposes a versatile-size tiling scheme, and further Reinforcement Learning (RL) is used to realize the Adaptive Bit Rate (ABR) selection of tiles. VMP360 is evaluated through the client-server streaming system with two prior schemes Pano and Plato. Statistics show that the proposed scheme can improve the quality of 360° video by 10.1% while maintaining same rebuffer ratio compared with the Pano and Plato, which confirms that VMP360 can provide a promising high QoE for 360° video streaming. The code of a prototype can be found in https://github.com/buptexplorers/OFB-VR.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128465633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual Characterization of 3D Graphical Contents based on Attention Complexity Measures","authors":"Mona Abid, Matthieu Perreira Da Silva, P. Callet","doi":"10.1145/3423328.3423498","DOIUrl":"https://doi.org/10.1145/3423328.3423498","url":null,"abstract":"This paper provides insights on how to perceptually characterize colored 3D Graphical Contents (3DGC). In this study, pre-defined viewpoints were considered to render static graphical objects. For perceptual characterization, we used visual attention complexity (VAC) measures. Considering a view-based approach to exploit the perceived information, an eye-tracking experiment was conducted using colored graphical objects. Based on the collected gaze data, we revised the VAC measure, suggested in 2D imaging context, and adapted it to 3DGC. We also provided an objective predictor that highly mimics the experimental attentional complexity information. This predictor can be useful in Quality of Experience (QoE) studies: to balance content selection when benchmarking 3DGC processing techniques (e.g., rendering, coding, streaming, etc.) for human panel studies or ad hoc key performance indicator, and also to optimize the user's QoE when rendering such contents.","PeriodicalId":402203,"journal":{"name":"Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132225528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}