{"title":"具有像素级可解释性的人脸图像质量感知指标","authors":"Byungho Jo , In Kyu Park , Sungeun Hong","doi":"10.1016/j.neucom.2024.128780","DOIUrl":null,"url":null,"abstract":"<div><div>This paper tackles the shortcomings of image evaluation metrics in evaluating facial image quality. Conventional metrics do neither accurately reflect the unique attributes of facial images nor correspond with human visual perception. To address these issues, we introduce a novel metric designed specifically for faces, utilizing a learning-based adversarial framework. This framework comprises a generator for simulating face restoration and a discriminator for quality evaluation. Drawing inspiration from facial neuroscience studies, our metric emphasizes the importance of primary facial features, acknowledging that minor changes in the eyes, nose, and mouth can significantly impact perception. Another key limitation of existing image evaluation metrics is their focus on numerical values at the image level, without providing insight into how different areas of the image contribute to the overall assessment. Our proposed metric offers interpretability regarding how each region of the image is evaluated. Comprehensive experimental results confirm that our face-specific metric surpasses traditional general image quality assessment metrics for facial images, including both full-reference and no-reference methods. The code and models are available at <span><span>https://github.com/AIM-SKKU/IFQA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Perceptual metric for face image quality with pixel-level interpretability\",\"authors\":\"Byungho Jo , In Kyu Park , Sungeun Hong\",\"doi\":\"10.1016/j.neucom.2024.128780\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper tackles the shortcomings of image evaluation metrics in evaluating facial image quality. Conventional metrics do neither accurately reflect the unique attributes of facial images nor correspond with human visual perception. To address these issues, we introduce a novel metric designed specifically for faces, utilizing a learning-based adversarial framework. This framework comprises a generator for simulating face restoration and a discriminator for quality evaluation. Drawing inspiration from facial neuroscience studies, our metric emphasizes the importance of primary facial features, acknowledging that minor changes in the eyes, nose, and mouth can significantly impact perception. Another key limitation of existing image evaluation metrics is their focus on numerical values at the image level, without providing insight into how different areas of the image contribute to the overall assessment. Our proposed metric offers interpretability regarding how each region of the image is evaluated. Comprehensive experimental results confirm that our face-specific metric surpasses traditional general image quality assessment metrics for facial images, including both full-reference and no-reference methods. 
The code and models are available at <span><span>https://github.com/AIM-SKKU/IFQA</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224015510\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224015510","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Perceptual metric for face image quality with pixel-level interpretability
This paper tackles the shortcomings of image evaluation metrics in evaluating facial image quality. Conventional metrics neither accurately reflect the unique attributes of facial images nor correspond with human visual perception. To address these issues, we introduce a novel metric designed specifically for faces, utilizing a learning-based adversarial framework. This framework comprises a generator for simulating face restoration and a discriminator for quality evaluation. Drawing inspiration from facial neuroscience studies, our metric emphasizes the importance of primary facial features, acknowledging that minor changes in the eyes, nose, and mouth can significantly impact perception. Another key limitation of existing image evaluation metrics is their focus on numerical values at the image level, without providing insight into how different areas of the image contribute to the overall assessment. Our proposed metric offers interpretability regarding how each region of the image is evaluated. Comprehensive experimental results confirm that our face-specific metric surpasses traditional general image quality assessment metrics for facial images, including both full-reference and no-reference methods. The code and models are available at https://github.com/AIM-SKKU/IFQA.
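To make the abstract's setup concrete, below is a minimal PyTorch-style sketch of the kind of framework described: a discriminator that outputs a per-pixel quality map (the source of pixel-level interpretability), pooled into an image-level score with extra weight on primary facial regions (eyes, nose, mouth). The class and function names, network architecture, and weighting scheme are illustrative assumptions for this sketch, not the authors' released IFQA implementation; region masks are assumed to come from an external face parser or landmark detector.

```python
# Hypothetical sketch (not the authors' code): a discriminator emitting a
# per-pixel quality map, plus weighted pooling that emphasizes primary
# facial regions, as motivated by the abstract.
import torch
import torch.nn as nn


class PixelQualityDiscriminator(nn.Module):
    """Maps an RGB face image to a per-pixel quality map in [0, 1]."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output keeps the input's spatial size: one quality score per pixel.
        return torch.sigmoid(self.net(x))


def face_quality_score(quality_map: torch.Tensor,
                       region_mask: torch.Tensor,
                       region_weight: float = 2.0) -> torch.Tensor:
    """Pool the pixel-level map into an image-level score.

    `region_mask` is a binary map (1 on eyes/nose/mouth, 0 elsewhere); the
    weighting here is illustrative, not the paper's exact formulation.
    """
    weights = 1.0 + (region_weight - 1.0) * region_mask
    return (quality_map * weights).sum(dim=(1, 2, 3)) / weights.sum(dim=(1, 2, 3))


if __name__ == "__main__":
    disc = PixelQualityDiscriminator()
    img = torch.rand(1, 3, 128, 128)         # dummy face image
    mask = torch.zeros(1, 1, 128, 128)
    mask[:, :, 40:90, 30:100] = 1.0          # placeholder eyes/nose/mouth region
    q_map = disc(img)                        # pixel-level interpretability map
    score = face_quality_score(q_map, mask)  # image-level quality score
    print(q_map.shape, score.item())
```

In this reading, the per-pixel map is what the paper exposes for interpretability, while the weighted pooling reflects the claim that small changes around the eyes, nose, and mouth should influence the overall score more than changes elsewhere.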
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.