{"title":"Impact of test condition selection in adaptive crowdsourcing studies on subjective quality","authors":"Michael Seufert, Ondrej Zach, T. Hossfeld, M. Slanina, P. Tran-Gia","doi":"10.1109/QoMEX.2016.7498939","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498939","url":null,"abstract":"Adaptive crowdsourcing is a new approach to crowdsourced Quality of Experience (QoE) studies, which aims to improve the certainty of resulting QoE models by adaptively distributing a fixed budget of user ratings to the test conditions. The main idea of the adaptation is to dynamically allocate the next rating to a condition, for which the submitted ratings so far show a low certainty. This paper investigates the effects of statistical adaptation on the distribution of ratings and the goodness of the resulting QoE models. Thereby, it gives methodological advice how to select test conditions for future crowdsourced QoE studies.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"49 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84092454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training listeners for multi-channel audio quality evaluation in MUSHRA with a special focus on loop setting","authors":"Nadja Schinkel-Bielefeld","doi":"10.1109/QoMEX.2016.7498952","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498952","url":null,"abstract":"Audio quality evaluation for audio material of intermediate and high quality requires expert listeners. In comparison to non-experts, these are not only more critical in their ratings, but also employ different strategies in their evaluation. In particular they concentrate on shorter sections of the audio signal and compare more to the reference than inexperienced listeners. We created a listener training for detecting coding artifacts in multi-channel audio quality evaluation. Our training is targeted at listeners without technical background. For this training, expert listeners commented on smaller sections of an audio signal they focused on in the listening test and provided a description of the artifacts they perceived. The non-expert listeners participating in the training were provided with general advice for helpful strategies in MUSHRA tests (Multi Stimulus Tests with Hidden Reference and Anchor), with the comments on specific sections of the stimulus by the experts, and with feedback after rating. Listener's performance improved in the course of the training session. Afterwards they performed the same test without the training material and a further test with different items. Performance did not decrease in these tests, showing that they could transfer what they had learned to other stimuli. After the training they also set more loops and compared more to the reference.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"104 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91016931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No-reference image quality assessment based on statistics of Local Ternary Pattern","authors":"P. Freitas, W. Y. L. Akamine, Mylène C. Q. Farias","doi":"10.1109/QoMEX.2016.7498959","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498959","url":null,"abstract":"In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement when compared to LBP. More specifically, LTP is less susceptible to noise in uniform regions, but no longer rigidly invariant to gray-level transformation. Due to its insensitivity to noise, LTP descriptors are not able to detect milder image degradation. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining a competitive image quality prediction accuracy.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"53 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77756516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using individual data to characterize emotional user experience and its memorability: Focus on gender factor","authors":"Romain Cohendet, Anne-Laure Gilet, Matthieu Perreira Da Silva, P. Callet","doi":"10.1109/QoMEX.2016.7498969","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498969","url":null,"abstract":"Delivering the same digital image to several users is not necessarily providing them the same experience. In this study, we focused on how different affective experiences impact the memorability of an image. Forty-nine participants took part in an experiment in which they saw a stream of images conveying various emotions. One day later, they had to recognize the images displayed the day before and rate them according to the positivity/ negativity of the emotional experience the images induced. In order to better appreciate the underlying idiosyncratic factors that affect the experience under test, prior to the test session we collected not only personal information but also results of psychological tests to characterize individuals according to their dominant personality in terms of masculinity-femininity (Bem Sex Role Inventory) and to measure their emotional state. The results show that the way an emotional experience is rated depends on personality rather than biological sex, suggesting that personality could be a mediator in the well-established differences in how males and females experience emotional material. From the collected data, we derive a model including individual factors relevant to characterize the memorability of the images, in particular through the emotional experience they induced.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"10 24 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79522927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studying user agreement on aesthetic appeal ratings and its relation with technical knowledge","authors":"Pierre R. Lebreton, A. Raake, M. Barkowsky","doi":"10.1109/QoMEX.2016.7498934","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498934","url":null,"abstract":"In this paper, a crowdsourcing experiment was conducted involving different panels of participants. The aim of this study is to evaluate how the preference of one image over another one is related with the knowledge of the participant in photography. In previous work the two discriminant evaluation concepts “presence of a main subject” and “exposure” were found to distinguish group participants with different degrees of knowledge in photography. Each of these groups provided different means of aesthetic appeal ratings when asked to rate on an absolute category scale. The present paper extends previous work by studying preference ratings on a set of image pairs as a function of technical knowledge and more specifically adding a focus on the variance of rating and agreement between participants. The conducted study was composed of two different steps where the participants had to first report their preference of one image over another (paired comparison), and an evaluation of the technical background of the participant using a specific set of images. Based on preference-rating patterns groups of participants were identified. These groups were formed by clustering the participants who saw and shared the same preference rating on images in one group, and the participants with low agreement with other participants in another group. A per-group analysis showed that a high agreement between participants could be observed when participants have technical knowledge. This indicates that higher consistency between participants can be reached when expert users are being recruited, and therefore participants should be carefully selected in image aesthetic appeal evaluation to ensure stable results.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"20 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77465004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video content analysis method for audiovisual quality assessment","authors":"Baris Konuk, Emin Zerman, G. Nur, G. Akar","doi":"10.1109/QoMEX.2016.7498965","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498965","url":null,"abstract":"In this study a novel, spatio-temporal characteristics based video content analysis method is presented. The proposed method has been evaluated on different video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on different databases demonstrate the robustness and accuracy of the proposed content analysis method. Moreover, this analysis method is employed in order to examine the performance improvement in audiovisual quality assessment when the video content is taken into consideration.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87412855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual quality of 4K-resolution video content compared to HD","authors":"G. Wallendael, Paulien Coppens, Tom Paridaens, Niels Van Kets, W. V. D. Broeck, P. Lambert","doi":"10.1109/QoMEX.2016.7498935","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498935","url":null,"abstract":"With the introduction of 4K UHD video and display resolution, questions arise on the perceptual differences between 4K UHD and upsampled HD video content. In this paper, a striped pair comparison has been performed on a diverse set of 4K UHD video sources. The goal was to subjectively assess the perceived sharpness of 4K UHD and downscaled/upscaled HD video. A striped pair comparison has been applied in order to make the test as straightforward as possible for a non-expert participant population. Under these conditions and over this set of sequences, on average, on 54.8% of the sequences (17 out of 31), 4K UHD resolution content could be identified as being sharper compared to its HD down and upsampled alternative. The probabilities in which 4K UHD could be differentiated from downscaled/upscaled HD range from 83.3% for the easiest to assess sequence down to 39.7% for the most difficult sequence. Although significance tests demonstrate there is a positive sharpness difference from camera quality 4K UHD content compared to the HD downscaled/upscaled variations, it is very content dependent and all circumstances have been chosen in favor of the 4K UHD representation. The results of this test can contribute to the research process of developing metrics indicating visibility of high resolution features within specific content.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"30 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89742301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of perceptual redundancies of HEVC encoded dynamic textures","authors":"Karam Naser, V. Ricordel, P. Callet","doi":"10.1109/QoMEX.2016.7498931","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498931","url":null,"abstract":"Statistical redundancies have been the dominant target in the image/video compression standards. Perceptually, there exists further redundancies that can be removed to further enhance the compression efficiency. In this paper, we considered short term homogeneous patches that fall into the foveal vision as dynamic textures, for which a psychophysical test was used to estimate their amount of perceptual redundancies. We demonstrated the possible rate saving by utilizing these redundancies. We further designed a learning model that can precisely predict the amount of redundancies and accordingly proposed a generalized perceptual optimization framework.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"11 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80882697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalizing User Interfaces for improving quality of experience in VoD recommender systems","authors":"Sharath Chandra Guntuku, S. Roy, Weisi Lin, Kelvin Ng, W. Ng, V. Jakhetiya","doi":"10.1109/QoMEX.2016.7498940","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498940","url":null,"abstract":"Recommending content to users involves understanding a) what to present and b) how to present them, so as to increase quality of experience (QoE) and thereby, content consumption. This work attempts to address the question of how to present contents in a way so that the user finds it easy to get to desired content. While the process of User Interface (UI) design is dependent on several human factors, there are basic design components and their combination that have to be common to any recommender system user interface. Personalization of the UI design process involves picking the right components and their combination, and presenting a UI to suit the usage behavior of an individual user, so as to enhance the QoE. This work proposes a system that learns from a user's content consumption patterns and makes some recommendations regarding how to present the content for the user (in the context of Video-On-Demand/Live-TV services on Computer displays), so as to enhance the QoE of the recommender system.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"13 1 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83473813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bitrate classification of twice-encoded audio using objective quality features","authors":"Colm Sloan, N. Harte, D. Kelly, A. Kokaram, Andrew Hines","doi":"10.1109/QoMEX.2016.7498956","DOIUrl":"https://doi.org/10.1109/QoMEX.2016.7498956","url":null,"abstract":"When a user uploads audio files to a music streaming service, these files are subsequently re-encoded to lower bitrates to target different devices, e.g. low bitrate for mobile. To save time and bandwidth uploading files, some users encode their original files using a lossy codec. The metadata for these files cannot always be trusted as users might have encoded their files more than once. Determining the lowest bitrate of the files allows the streaming service to skip the process of encoding the files to bitrates higher than that of the uploaded files, saving on processing and storage space. This paper presents a model that uses quality predictions from ViSQOLAudio, a full reference objective audio quality metric, as features in combination with a multi-class support vector machine classifier. An experiment on twice-encoded files found that low bitrate codecs could be classified using audio quality features. The experiment also provides insights into the implications of multiple transcodes from a quality perspective.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"89 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78398305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}