Subjective Quality Assessment of Foveated Omnidirectional Images in Virtual Reality
Ali Bozorgian; Marius Pedersen; Jean-Baptiste Thomas; Mohamed-Chaker Larabi
IEEE Open Journal on Immersive Displays, vol. 2, pp. 5-16, published 2025-03-28
DOI: 10.1109/OJID.2025.3556364
URL: https://ieeexplore.ieee.org/document/10945651/
Citations: 0
Abstract
This study presents a novel dataset called “Foveated Omnidirectional Image Quality Assessment” (FOIQA) for the subjective quality evaluation of foveated 2D omnidirectional images. This dataset addresses the limitations of existing datasets by leveraging a high-resolution head-mounted display and a gaze-contingent evaluation approach. We provide individual opinion scores, mean opinion scores, and gaze data associated with both the test and reference images. The utility of our dataset is validated by benchmarking two existing objective foveated image quality metrics. Our results demonstrate that incorporating gaze data into the evaluation framework improves the accuracy of one of the tested objective metrics. The dataset is publicly available at https://doi.org/10.5281/zenodo.14009106.
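The benchmarking mentioned in the abstract follows the usual image quality assessment protocol of correlating an objective metric's predictions with the subjective mean opinion scores. Below is a minimal sketch of such a comparison; the file names and column names (foiqa_scores.csv, metric_scores.csv, image_id, mos, metric_score) are hypothetical placeholders and do not reflect the dataset's actual schema, which should be taken from the files published on Zenodo.

```python
# Minimal sketch: correlate an objective metric's predictions with the
# FOIQA mean opinion scores (MOS). All file and column names below are
# hypothetical placeholders; adapt them to the dataset's actual layout.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-image MOS table and objective-metric output table.
scores = pd.read_csv("foiqa_scores.csv")
predictions = pd.read_csv("metric_scores.csv")

# Align subjective and objective scores on a shared image identifier.
merged = scores.merge(predictions, on="image_id")

# PLCC measures linear agreement; SROCC measures monotonic (rank) agreement.
plcc, _ = pearsonr(merged["metric_score"], merged["mos"])
srocc, _ = spearmanr(merged["metric_score"], merged["mos"])

print(f"PLCC:  {plcc:.3f}")
print(f"SROCC: {srocc:.3f}")
```

PLCC and SROCC are the conventional summary statistics for this kind of benchmark; a nonlinear mapping between metric scores and MOS is often fitted before computing PLCC, which is omitted here for brevity.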