Opinion-unaware blind quality assessment of AI-generated omnidirectional images based on deep feature statistics

IF 2.6 · CAS Tier 4 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Xuelin Liu, Jiebin Yan, Yuming Fang, Jingwen Hou
DOI: 10.1016/j.jvcir.2025.104461
Journal: Journal of Visual Communication and Image Representation, Volume 110, Article 104461
Published: 2025-04-22 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1047320325000756
Citations: 0

Abstract

The advancement of artificial intelligence generated content (AIGC) and virtual reality (VR) technologies has prompted AI-generated omnidirectional images (AGOIs) to gradually enter people's daily lives. Compared to natural omnidirectional images, AGOIs exhibit both traditional low-level technical distortions and high-level semantic distortions, which can severely affect the immersive experience for users in practical applications. Consequently, there is an urgent need for thorough research on and precise evaluation of AGOI quality. In this paper, we propose a novel opinion-unaware (OU) blind quality assessment approach for AGOIs based on deep feature statistics. Specifically, we first transform AGOIs in equirectangular projection (ERP) format into a set of six cubemap projection (CMP)-converted viewport images, and extract viewport-wise multi-layer deep features from a pre-trained neural network backbone. Multivariate Gaussian (MVG) models are subsequently fitted to these deep representations. The individual quality score for each CMP-converted image is calculated by comparing it against the corresponding fitted pristine MVG model, and the final quality score for a test AGOI is computed by aggregating these individual scores. We conduct comprehensive experiments on the existing AGOIQA database, and the results show that the proposed OU-BAGOIQA model outperforms current state-of-the-art OU blind image quality assessment models. Ablation studies have also been conducted to validate the effectiveness of our method.
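The scoring pipeline described above (fit an MVG to pristine deep features, fit per-viewport MVGs for a test image, then measure the statistical distance between them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the feature dimensionality, and the use of a NIQE-style Mahalanobis-like distance with a pooled covariance are assumptions for the sake of the example, and real viewport features would come from a pre-trained backbone rather than random arrays.

```python
import numpy as np

def fit_mvg(features):
    """Fit a multivariate Gaussian (mean vector, covariance matrix)
    to a matrix of row-wise feature vectors."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

def mvg_distance(mu_p, sigma_p, mu_t, sigma_t):
    """NIQE-style distance between a pristine and a test MVG model,
    using the pseudo-inverse of the pooled covariance for stability."""
    diff = mu_p - mu_t
    pooled = (sigma_p + sigma_t) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

def score_agoi(viewport_features, pristine_mu, pristine_sigma):
    """Score one AGOI: fit an MVG per CMP viewport, compare each to the
    pristine model, and aggregate by averaging (lower = closer to
    pristine statistics, i.e. higher predicted quality)."""
    scores = []
    for feats in viewport_features:  # one feature matrix per CMP face
        mu_t, sigma_t = fit_mvg(feats)
        scores.append(mvg_distance(pristine_mu, pristine_sigma, mu_t, sigma_t))
    return float(np.mean(scores))
```

As a sanity check, features drawn from the same distribution as the pristine set should score lower (closer to pristine) than features whose statistics deviate from it.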
Source journal

Journal of Visual Communication and Image Representation
Engineering & Technology - Computer Science: Software Engineering

CiteScore: 5.40
Self-citation rate: 11.50%
Articles per year: 188
Review time: 9.9 months
Journal description: The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.