Blind quality-based pairwise ranking of contrast-changed color images using deep networks

IF 3.4 · CAS Tier 3 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Aladine Chetouani, Muhammad Ali Qureshi, Mohamed Deriche, Azeddine Beghdadi
Journal: Signal Processing-Image Communication, Volume 121, Article 117059
DOI: 10.1016/j.image.2023.117059
Published: 2023-09-23 (Journal Article)
Open access: No
Citations: 0

Abstract

Blind quality-based pairwise ranking of contrast changed color images using deep networks

Next-generation multimedia networks are expected to provide users with systems and applications offering top Quality of Experience (QoE). To this end, robust quality evaluation metrics are critical. Unfortunately, most current research focuses mainly on modeling and evaluating distortions across the pipeline of multimedia networks. While distortions are important, it is equally important to consider the effects of enhancement and other manipulations of multimedia content, especially images and videos. In contrast to most existing works dedicated to evaluating image/video quality in its traditional context, very few research efforts have been devoted to Image Quality Enhancement Assessment (IQEA) and, more specifically, Contrast Enhancement Evaluation (CEE). Our contribution fills this gap by proposing a pairwise ranking scheme for estimating and evaluating the perceptual quality of the image contrast-change process (contrast-enhanced and/or contrast-distorted images). We propose a novel Deep Learning-based Blind Quality pairwise Ranking scheme for Contrast-Changed (Deep-BQRCC) images. This method provides an automatic pairwise ranking of a set of contrast-changed images. The proposed framework is based on a pair of Convolutional Neural Networks (CNN) together with a saliency-based attention model and a color-difference visual map. Extensive experiments were conducted to validate the effectiveness of the proposed workflow through an ablation analysis. Different combinations of CNN models and pooling strategies were analyzed. The proposed Deep-BQRCC approach was evaluated over three dedicated publicly available datasets. The experimental results showed an increase in performance within a range of 3–10% compared to state-of-the-art IQEA measures.
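The pairwise-ranking idea in the abstract can be sketched independently of the network details: given a learned comparator that judges which of two images has higher perceptual quality, a full ordering of a set of images can be recovered by aggregating pairwise wins. The sketch below uses a simple win-count (Copeland-style) aggregation with a toy stand-in comparator; the authors' actual comparator is the Deep-BQRCC CNN pair, and the aggregation shown here is an illustrative assumption, not the paper's exact pooling strategy.

```python
from itertools import combinations

def rank_by_pairwise_wins(items, prefer):
    """Order `items` by counting pairwise wins.

    `prefer(a, b)` returns True if `a` is judged higher quality
    than `b` (a stand-in for a learned CNN comparator).
    """
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        if prefer(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    # Highest win count first; with a consistent comparator this
    # yields a total order over the set.
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Toy example: each "image" carries a hidden quality score that the
# stand-in comparator peeks at.
images = [("img_a", 0.2), ("img_b", 0.9), ("img_c", 0.5)]
ranking = rank_by_pairwise_wins(images, prefer=lambda a, b: a[1] > b[1])
print([name for name, _ in ranking])  # -> ['img_b', 'img_c', 'img_a']
```

For n images this makes n(n-1)/2 comparator calls, which is why pooling strategies over pairwise outputs (as analyzed in the paper's ablation study) matter for larger image sets.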

Source journal
Signal Processing-Image Communication (Engineering & Technology — Engineering: Electrical & Electronic)
CiteScore: 8.40
Self-citation rate: 2.90%
Articles per year: 138
Review time: 5.2 months
Journal description: Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following: to present a forum for the advancement of theory and practice of image communication; to stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems; and to contribute to a rapid information exchange between the industrial and academic environments.

The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The journal is self-supporting from subscription income and contains a minimum amount of advertisements, which are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.

Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, and architectures for image/video processing and communication.