A screening methodology for crowdsourcing video QoE evaluation

Louis Anegekuh, Lingfen Sun, E. Ifeachor
{"title":"A screening methodology for crowdsourcing video QoE evaluation","authors":"Louis Anegekuh, Lingfen Sun, E. Ifeachor","doi":"10.1109/GLOCOM.2014.7036964","DOIUrl":null,"url":null,"abstract":"Recently, crowdsourcing has emerged as a cheaper and quicker alternative to traditional laboratory based Quality of Experience (QoE) evaluation for video streaming services. Crowdsourcing is a process of recruiting anonymous members of the public to solve/perform different tasks without supervision. This approach of seeking solutions from the public has been enhanced by ubiquitous internet connectivity, low cost and the ease to recruit workers without geographical restrictions. Although crowdsourcing makes it possible to save cost and reach a large group of people to perform subjective video quality testing, challenges such as validity of results and the trustworthiness of workers still remain unresolved. In this paper, we attempt to address some of these challenges by developing a screening algorithm that is able to determine the validity and trustworthiness of crowdsourcing video QoE evaluation results. This algorithm is based on evaluator's extracted data (e.g. Network IP addresses, network devices and browser information, time spent on the crowdsourcing platform, date and grading patterns). To determine the performance of this algorithm, we carried out a separate controlled laboratory based subjective video quality testing. This test enables us to determine how screened crowdsourcing results correlate with lab based results. Preliminary results show that crowdsourcing results can be improved by up to 59% when the proposed screening algorithm is applied. Moreover, the results support our assertion that crowdsourcing can provide reliable video QoE measurements if proper screening is performed on the test results.","PeriodicalId":6492,"journal":{"name":"2014 IEEE Global Communications Conference","volume":"8 1","pages":"1152-1157"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Global Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOCOM.2014.7036964","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

Recently, crowdsourcing has emerged as a cheaper and quicker alternative to traditional laboratory-based Quality of Experience (QoE) evaluation for video streaming services. Crowdsourcing is the process of recruiting anonymous members of the public to perform different tasks without supervision. This approach to seeking solutions from the public has been aided by ubiquitous internet connectivity, low cost, and the ease of recruiting workers without geographical restrictions. Although crowdsourcing makes it possible to save costs and reach a large group of people for subjective video quality testing, challenges such as the validity of results and the trustworthiness of workers remain unresolved. In this paper, we attempt to address some of these challenges by developing a screening algorithm that determines the validity and trustworthiness of crowdsourced video QoE evaluation results. The algorithm is based on data extracted from evaluators (e.g. network IP addresses, network device and browser information, time spent on the crowdsourcing platform, date, and grading patterns). To assess the performance of the algorithm, we carried out a separate, controlled, laboratory-based subjective video quality test, which allowed us to determine how well the screened crowdsourcing results correlate with lab-based results. Preliminary results show that crowdsourcing results can be improved by up to 59% when the proposed screening algorithm is applied. Moreover, the results support our assertion that crowdsourcing can provide reliable video QoE measurements if proper screening is performed on the test results.
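
The abstract names the signals the screening algorithm draws on (IP addresses, device and browser information, time spent on the platform, and grading patterns) without giving the exact rules. The sketch below is a minimal, hypothetical illustration of that kind of screening, not the authors' published algorithm: the field names, thresholds, and the Pearson-correlation check against lab-based scores are all assumptions made for illustration.

```python
# Hypothetical sketch of the kind of screening rules described in the abstract.
# Field names (ip_address, user_agent, duration_s, ratings), thresholds, and the
# Pearson check against lab scores are illustrative assumptions, not the paper's
# exact algorithm.
from statistics import mean, pstdev


def looks_trustworthy(session, min_duration_s=60, min_rating_spread=0.5):
    """Apply per-worker checks to one crowdsourcing session.

    `session` is a dict with keys such as:
      'ip_address' - worker's network IP address
      'user_agent' - browser/device string reported by the platform
      'duration_s' - time spent on the crowdsourcing platform
      'ratings'    - list of per-video scores (e.g. 1..5 opinion scores)
    """
    ratings = session["ratings"]

    # Sessions finished implausibly fast suggest the videos were not watched.
    if session["duration_s"] < min_duration_s:
        return False

    # A flat grading pattern (the same score for every clip) carries no
    # information and is a common sign of click-through behaviour.
    if len(set(ratings)) < 2 or pstdev(ratings) < min_rating_spread:
        return False

    # Missing device/browser information often indicates scripted submissions.
    if not session.get("user_agent"):
        return False

    return True


def screen_campaign(sessions):
    """Drop repeated submissions from the same IP, then apply per-worker rules."""
    seen_ips, accepted = set(), []
    for s in sessions:
        if s["ip_address"] in seen_ips:
            continue  # duplicate IP: keep only the first submission
        seen_ips.add(s["ip_address"])
        if looks_trustworthy(s):
            accepted.append(s)
    return accepted


def pearson(xs, ys):
    """Pearson correlation, e.g. between screened crowdsourcing MOS and lab MOS."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * pstdev(xs) * pstdev(ys))
```

In the paper's workflow, agreement between the screened crowdsourcing scores and the separate laboratory results is what supports the reported improvement of up to 59%; the `pearson` helper above shows one common way such an agreement check can be computed.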