{"title":"Impact of test condition selection in adaptive crowdsourcing studies on subjective quality","authors":"Michael Seufert, Ondrej Zach, T. Hossfeld, M. Slanina, P. Tran-Gia","doi":"10.1109/QoMEX.2016.7498939","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498939","url":null,"abstract":"Adaptive crowdsourcing is a new approach to crowdsourced Quality of Experience (QoE) studies, which aims to improve the certainty of the resulting QoE models by adaptively distributing a fixed budget of user ratings to the test conditions. The main idea of the adaptation is to dynamically allocate the next rating to a condition for which the ratings submitted so far show low certainty. This paper investigates the effects of statistical adaptation on the distribution of ratings and the goodness of the resulting QoE models. Thereby, it gives methodological advice on how to select test conditions for future crowdsourced QoE studies.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"49 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84092454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
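The allocation rule described in the abstract above (give the next rating to the condition whose ratings are least certain) can be sketched as follows. This is only an illustration: the 95% confidence-interval criterion, the helper names, and the condition labels are assumptions, not details taken from the paper.

```python
import math
import statistics

def ci_half_width(ratings, z=1.96):
    """95% confidence-interval half-width of the mean rating.

    Conditions with fewer than 2 ratings are treated as maximally
    uncertain so they are sampled first.
    """
    if len(ratings) < 2:
        return float("inf")
    return z * statistics.stdev(ratings) / math.sqrt(len(ratings))

def next_condition(ratings_per_condition):
    """Adaptive step: pick the test condition whose current ratings
    show the lowest certainty (widest confidence interval)."""
    return max(ratings_per_condition,
               key=lambda c: ci_half_width(ratings_per_condition[c]))

# Example: condition "b" has the highest rating variance, hence the
# widest confidence interval, so it receives the next rating.
ratings = {"a": [4, 4, 5, 4], "b": [1, 5, 2, 5], "c": [3, 3, 3, 3]}
```

Repeating `next_condition` after each submitted rating spends the fixed rating budget where it reduces model uncertainty the most.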
{"title":"Training listeners for multi-channel audio quality evaluation in MUSHRA with a special focus on loop setting","authors":"Nadja Schinkel-Bielefeld","doi":"10.1109/QoMEX.2016.7498952","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498952","url":null,"abstract":"Audio quality evaluation for audio material of intermediate and high quality requires expert listeners. In comparison to non-experts, they are not only more critical in their ratings, but also employ different strategies in their evaluation. In particular, they concentrate on shorter sections of the audio signal and compare to the reference more than inexperienced listeners do. We created a listener training for detecting coding artifacts in multi-channel audio quality evaluation. Our training is targeted at listeners without a technical background. For this training, expert listeners commented on the smaller sections of an audio signal they focused on in the listening test and provided a description of the artifacts they perceived. The non-expert listeners participating in the training were provided with general advice on helpful strategies in MUSHRA tests (Multi Stimulus Tests with Hidden Reference and Anchor), with the experts' comments on specific sections of the stimulus, and with feedback after rating. Listeners' performance improved in the course of the training session. Afterwards, they performed the same test without the training material and a further test with different items. Performance did not decrease in these tests, showing that they could transfer what they had learned to other stimuli. After the training, they also set more loops and compared more to the reference.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"104 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91016931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No-reference image quality assessment based on statistics of Local Ternary Pattern","authors":"P. Freitas, W. Y. L. Akamine, Mylène C. Q. Farias","doi":"10.1109/QoMEX.2016.7498959","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498959","url":null,"abstract":"In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement over LBP. More specifically, LTP is less susceptible to noise in uniform regions, but no longer rigidly invariant to gray-level transformations. Due to their insensitivity to noise, however, LTP descriptors are unable to detect milder image degradations. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining competitive image quality prediction accuracy.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"53 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77756516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
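A minimal sketch of the LTP decomposition the abstract above builds on, assuming a 3x3 neighborhood and an illustrative tolerance threshold t; the paper's multi-channel strategy and learned regressor are not reproduced here.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern over the 8-neighborhood, split into the usual
    'upper' (clearly brighter) and 'lower' (clearly darker) binary codes.
    `t` is the tolerance band around the center gray level (assumed value)."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]  # center pixels (image borders dropped for simplicity)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image aligned with the center pixels
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper |= (n >= c + t).astype(np.int32) << bit
        lower |= (n <= c - t).astype(np.int32) << bit
    return upper, lower

def ltp_histogram(img, t=5):
    """Concatenated 256-bin histograms of both codes: a simple feature
    vector of the kind a quality regressor could be trained on."""
    upper, lower = ltp_codes(img, t)
    hu = np.bincount(upper.ravel(), minlength=256)
    hl = np.bincount(lower.ravel(), minlength=256)
    return np.concatenate([hu, hl])
```

On a uniform region, every neighbor falls inside the tolerance band, so both codes are zero; this is the noise robustness (and the insensitivity to mild degradations) the abstract mentions.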
{"title":"Using individual data to characterize emotional user experience and its memorability: Focus on gender factor","authors":"Romain Cohendet, Anne-Laure Gilet, Matthieu Perreira Da Silva, P. Callet","doi":"10.1109/QoMEX.2016.7498969","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498969","url":null,"abstract":"Delivering the same digital image to several users does not necessarily provide them with the same experience. In this study, we focused on how different affective experiences impact the memorability of an image. Forty-nine participants took part in an experiment in which they saw a stream of images conveying various emotions. One day later, they had to recognize the images displayed the day before and rate them according to the positivity/negativity of the emotional experience the images induced. In order to better appreciate the underlying idiosyncratic factors that affect the experience under test, prior to the test session we collected not only personal information but also the results of psychological tests, to characterize individuals according to their dominant personality in terms of masculinity-femininity (Bem Sex Role Inventory) and to measure their emotional state. The results show that the way an emotional experience is rated depends on personality rather than biological sex, suggesting that personality could be a mediator in the well-established differences in how males and females experience emotional material. From the collected data, we derive a model including individual factors relevant to characterizing the memorability of the images, in particular through the emotional experience they induced.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"10 24 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79522927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studying user agreement on aesthetic appeal ratings and its relation with technical knowledge","authors":"Pierre R. Lebreton, A. Raake, M. Barkowsky","doi":"10.1109/QoMEX.2016.7498934","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498934","url":null,"abstract":"In this paper, a crowdsourcing experiment was conducted involving different panels of participants. The aim of this study is to evaluate how the preference for one image over another is related to the participant's knowledge of photography. In previous work, the two discriminant evaluation concepts “presence of a main subject” and “exposure” were found to distinguish groups of participants with different degrees of knowledge in photography. Each of these groups provided different mean aesthetic appeal ratings when asked to rate on an absolute category scale. The present paper extends previous work by studying preference ratings on a set of image pairs as a function of technical knowledge, focusing more specifically on the variance of ratings and the agreement between participants. The study was composed of two steps: participants first had to report their preference for one image over another (paired comparison), and then their technical background was evaluated using a specific set of images. Based on preference-rating patterns, groups of participants were identified: participants who saw the same images and shared the same preference ratings were clustered into one group, and participants with low agreement with other participants into another. A per-group analysis showed that high agreement between participants could be observed when participants had technical knowledge. This indicates that higher consistency between participants can be reached when expert users are recruited; participants should therefore be carefully selected in image aesthetic appeal evaluations to ensure stable results.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"20 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77465004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating color difference measures in images","authors":"Benhur Ortiz Jaramillo, A. Kumcu, W. Philips","doi":"10.1109/QoMEX.2016.7498922","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498922","url":null,"abstract":"The best-known and most widely used method for comparing two homogeneous color samples is the CIEDE2000 color difference formula, owing to its strong agreement with human perception. However, the formula is unreliable when applied to images, and its spatial extensions have shown little improvement over the original formula. Hence, researchers have proposed many methods intended to measure color differences (CDs) in natural scene color images. However, these existing methods have not yet been rigorously compared. Therefore, in this work we review and evaluate CD measures with the purpose of answering the question: to what extent do state-of-the-art CD measures agree with human perception of CDs in images? To answer this question, we have reviewed and evaluated eight state-of-the-art CD measures on a public image quality database. We found that CIEDE2000, its spatial extension, and the just noticeable CD measure perform well in computing CDs in images distorted by black level shift and color quantization algorithms (correlations higher than 0.8). However, none of the tested CD measures performs well at identifying CDs across the variety of color-related distortions tested in this work; e.g., most of the tested CD measures showed correlations lower than 0.65.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"9 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90412485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
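As a minimal illustration of the pattern these measures follow (a pixel-wise color difference pooled into one score), here is the simple CIE76 ΔE*ab, plain Euclidean distance in CIELAB and a much simpler predecessor of CIEDE2000; the inputs are assumed to be already converted to Lab, and mean pooling is one assumed choice among several.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Per-pixel CIE76 color difference map: Euclidean distance in CIELAB.
    Inputs are H x W x 3 arrays assumed to already be in Lab coordinates."""
    diff = lab1.astype(np.float64) - lab2.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def mean_delta_e(lab1, lab2):
    """Mean-pooled ΔE map: one common way to reduce a per-pixel color
    difference map to a single image-level score."""
    return delta_e_cie76(lab1, lab2).mean()
```

CIEDE2000 replaces the Euclidean distance with lightness, chroma, and hue terms weighted for perceptual uniformity, which is why it agrees better with human judgments on homogeneous patches.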
{"title":"Visual attention as a dimension of QoE: Subtitles in UHD videos","authors":"Toinon Vigier, Yoann Baveye, J. Rousseau, P. Callet","doi":"10.1109/QoMEX.2016.7498924","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498924","url":null,"abstract":"With the ever-growing availability of multimedia content produced, broadcast and consumed worldwide, subtitling is becoming an essential service to quickly share understandable content. Simultaneously, the increased resolution of the ultra high definition (UHD) standard comes with wider screens and new viewing conditions. Services such as the display of subtitles thus require adaptation to better fit the newly induced viewing visual angle. This paper aims at evaluating the quality of experience of subtitled movies in UHD in order to propose guidelines for the appearance of subtitles. From an eye-tracking experiment conducted with 68 observers and 30 video sequences, viewing behavior and visual saliency are analyzed with and without subtitles and for different subtitle styles. Various metrics based on eye-tracking data, such as the Reading Index for Dynamic Texts (RIDT), are computed to objectively measure ease of reading and subtitle disturbance. The results mainly show that doubling the visual angle of subtitles from HD to UHD guarantees subtitle readability without compromising the enjoyment of the video content.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"32 5 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82764619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual image quality enhancement for solar radio image","authors":"Long Xu, Lin Ma, Zhuo Chen, Xianyou Zeng, Yihua Yan","doi":"10.1109/QoMEX.2016.7498933","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498933","url":null,"abstract":"In solar radio observation, the visualization of data is very important, since it delivers information of interest about solar radio activities to astronomers more intuitively and clearly. For visualization, good visual quality of images/videos is highly desirable, as it favors the discovery of solar radio events recorded in observation data. The existing imaging system cannot guarantee good visual quality of solar radio data visualization. In this paper, an image quality enhancement algorithm is developed to improve solar radio extreme ultraviolet (EUV) images from the Solar Dynamics Observatory (SDO). First, the guided filter is employed to smooth the image, yielding an output with good skeleton and edges. Since the fine structures of solar radio activities are embedded in the high-frequency components of a solar radio image, we propose a novel structure-preserving filtering that amplifies the difference signal obtained by subtracting the smoothed image from the original input. Afterwards, the final enhanced image is generated by fusing the amplified details with the smoothed image. The experimental results show that image quality is significantly improved by the proposed enhancement algorithm.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"11 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91163733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
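The smooth-then-amplify-then-fuse pipeline described above can be sketched in unsharp-masking style. This is a simplified stand-in, not the paper's structure-preserving filter: the base layer is assumed to come from an edge-preserving smoother such as the guided filter, and the gain `k` is an illustrative value.

```python
import numpy as np

def detail_boost(img, base, k=2.0):
    """Enhance an image by amplifying its high-frequency residual.

    `img`  : original image (float array, 0-255 range assumed)
    `base` : smoothed version of `img` (e.g. from a guided filter)
    `k`    : detail gain (assumed value; k=1 reproduces the original)
    """
    img = img.astype(np.float64)
    base = base.astype(np.float64)
    detail = img - base        # high-frequency residual (fine structures)
    out = base + k * detail    # fuse amplified detail with the base layer
    return np.clip(out, 0, 255)
```

With an edge-preserving base layer, the amplified residual contains fine structure rather than halo-prone edge overshoot, which is the motivation for using the guided filter instead of a plain blur.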
{"title":"How to benchmark objective quality metrics from paired comparison data?","authors":"Philippe Hanhart, Lukáš Krasula, P. Callet, T. Ebrahimi","doi":"10.1109/QoMEX.2016.7498960","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498960","url":null,"abstract":"The procedures commonly used to evaluate the performance of objective quality metrics rely on ground truth mean opinion scores and associated confidence intervals, which are usually obtained via direct scaling methods. However, indirect scaling methods, such as the paired comparison method, can also be used to collect ground truth preference scores. Indirect scaling methods have a higher discriminatory power and are gaining popularity, for example in crowdsourcing evaluations. In this paper, we show how classification errors, an existing analysis tool, can also be applied to subjective preference scores. Additionally, we propose a new analysis tool based on receiver operating characteristic analysis, which can be used to further assess the performance of objective metrics based on ground truth preference scores. We provide a MATLAB script implementing the proposed tools and show one example of their application.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"10 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91240277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
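A toy version of benchmarking a metric against paired-comparison ground truth: count how often the metric's score ordering reproduces the subjective preference. This is only an illustration of the underlying idea; the paper's actual tools are classification errors and a ROC-based analysis, provided as a MATLAB script, and the data layout here is an assumption.

```python
def pairwise_agreement(metric_scores, preferences):
    """Fraction of subjective pairwise preferences reproduced by a metric.

    `metric_scores` maps stimulus id -> objective quality score.
    `preferences` is a list of (winner, loser) id pairs from a paired
    comparison test; a pair counts as correctly classified when the
    metric scores the preferred stimulus higher.
    """
    correct = sum(metric_scores[w] > metric_scores[l] for w, l in preferences)
    return correct / len(preferences)
```

Sweeping a decision threshold over the metric's score differences, instead of using a fixed "higher wins" rule, is what turns this pair classification into the ROC-style analysis the abstract proposes.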
{"title":"Video content analysis method for audiovisual quality assessment","authors":"Baris Konuk, Emin Zerman, G. Nur, G. Akar","doi":"10.1109/QoMEX.2016.7498965","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498965","url":null,"abstract":"In this study, a novel video content analysis method based on spatio-temporal characteristics is presented. The proposed method has been evaluated on different video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on these databases demonstrate the robustness and accuracy of the proposed content analysis method. Moreover, the analysis method is employed to examine the performance improvement in audiovisual quality assessment when video content is taken into consideration.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87412855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}