SIQ288: A saliency dataset for image quality research
Wei Zhang, Hantao Liu
2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP), September 2016. DOI: 10.1109/MMSP.2016.7813334
Saliency modelling for image quality research has been an active topic in multimedia over the last five years. Saliency aspects have been added to many image quality metrics (IQMs) to improve their performance in predicting perceived quality. However, challenges to optimising the performance of saliency-based IQMs remain. To make further progress, a better understanding of human attention deployment in relation to image quality, obtained through eye-tracking experimentation, is indispensable. Collecting substantial eye-tracking data is often biased by the massive stimulus repetition that typically occurs in an image quality study. To mitigate this problem, we propose a new experimental methodology with dedicated control mechanisms, which allows the collection of more reliable eye-tracking data. We recorded 5760 trials of eye movements from 160 human observers. Our dataset consists of 288 images representing a large degree of variability in terms of scene content, distortion type, and degradation level. We illustrate how saliency is affected by variations in image quality. We also compare state-of-the-art saliency models in terms of predicting where people look in both original and distorted scenes. Our dataset helps investigate the actual role saliency plays in judging image quality, and provides a benchmark for gauging saliency models in the context of image quality.
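Benchmarking saliency models against eye-tracking data, as the abstract describes, typically means scoring a model's saliency map against recorded fixation locations. The paper does not specify its evaluation metrics here; as a minimal sketch, the snippet below implements one widely used metric, Normalized Scanpath Saliency (NSS): the saliency map is z-scored and averaged at fixated pixels, so values well above 0 indicate the model assigns above-average saliency where observers actually looked. The function name and the toy maps are illustrative, not from the dataset.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixation_map):
    """NSS: mean of the z-scored saliency map at fixated locations.

    saliency_map: 2-D array of model-predicted saliency values.
    fixation_map: 2-D boolean array, True where an observer fixated.
    """
    s = np.asarray(saliency_map, dtype=float)
    f = np.asarray(fixation_map, dtype=bool)
    # Z-score so that chance-level prediction scores ~0 regardless of scale.
    s = (s - s.mean()) / (s.std() + 1e-12)
    return float(s[f].mean())

# Toy example: the model puts its saliency peak exactly on the fixation.
sal = np.array([[0.0, 0.0, 0.0],
                [0.0, 9.0, 0.0],
                [0.0, 0.0, 0.0]])
fix = np.zeros((3, 3), dtype=bool)
fix[1, 1] = True
print(normalized_scanpath_saliency(sal, fix))  # positive: peak matches fixation
```

A higher NSS over a set of images indicates better agreement with human gaze; computing it separately on the original and distorted versions of each scene is one way to quantify how distortion shifts attention, in the spirit of the comparison the abstract describes.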