Proceedings of the 1st Mile-High Video Conference: Latest Publications

Efficient bitrate ladder construction for live video streaming
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517300
V. V. Menon, Hadi Amirpour, M. Ghanbari, C. Timmerer
Abstract: In live streaming applications, service providers generally use a bitrate ladder with fixed bitrate-resolution pairs instead of optimizing it per title, to avoid the additional latency incurred in finding optimum bitrate-resolution pairs for every video content. This paper introduces an online bitrate ladder construction scheme for live video streaming applications. In this scheme, the optimized resolution for each target bitrate is determined from any pre-defined set of resolutions using Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features for each video segment. Experimental results show that, on average, the proposed scheme yields significant bitrate savings while maintaining the same quality, compared to the HLS fixed bitrate ladder scheme, without any noticeable additional latency in streaming.
Citations: 1
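The abstract names DCT-energy-based spatial and temporal features as the basis for choosing a resolution per target bitrate, but does not spell out the computation. The sketch below is a hypothetical illustration of that general idea, not the authors' published model: a per-segment spatial texture energy from block DCTs, a temporal feature from frame-to-frame energy differences, and a toy rule that steps the resolution down earlier for more complex segments. The block size, weighting, and ladder thresholds are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

BLOCK = 32  # illustrative block size; the paper's exact parameters are not given here


def block_texture_energy(frame: np.ndarray) -> float:
    """Average DCT-based texture energy of one luma frame (hypothetical weighting)."""
    h, w = frame.shape
    h, w = h - h % BLOCK, w - w % BLOCK
    energies = []
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            coeffs = dctn(frame[y:y + BLOCK, x:x + BLOCK].astype(np.float64), norm="ortho")
            coeffs[0, 0] = 0.0                     # drop the DC term: texture energy only
            energies.append(np.abs(coeffs).sum())
    return float(np.mean(energies))


def segment_features(frames: list[np.ndarray]) -> tuple[float, float]:
    """Spatial feature (mean energy) and temporal feature (mean frame-to-frame change)."""
    per_frame = [block_texture_energy(f) for f in frames]
    spatial = float(np.mean(per_frame))
    temporal = float(np.mean(np.abs(np.diff(per_frame)))) if len(per_frame) > 1 else 0.0
    return spatial, temporal


def pick_resolution(bitrate_kbps: int, spatial: float, temporal: float) -> int:
    """Toy rule: complex segments need more bits per pixel, so step down resolution sooner."""
    complexity = spatial + 2.0 * temporal          # made-up weighting of the two features
    ladder = [(2160, 12000), (1440, 7000), (1080, 4000), (720, 2000), (540, 1000), (360, 0)]
    for height, min_kbps in ladder:
        if bitrate_kbps >= min_kbps * (1.0 + complexity / 1e6):
            return height
    return 360
```

In a live setting the two features would be computed once per segment and the rule applied to every target bitrate of the ladder, which is what keeps the per-title decision cheap enough for real time.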
Incapable capabilities - less is more: improving user experiences and interoperability for streaming media services
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517287
T. Stockhammer, C. Concolato
Abstract: This document reviews existing functionalities for media capability mechanisms in the streaming media space. It shows the multitude of existing functionalities and provides an overview of what is used in practice. We provide recommendations to implementers and industry fora on how media capability signalling can be improved, including a focused (extensible, yet compact) model for signalling relevant capabilities, a simple mapping to device APIs, and support for the correct implementation of capabilities on devices.
Citations: 0
Delivering universal TV services in a multi-network and multi-device world with DVB-I
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517276
T. Biatek, M. Raulet, Patrice Angot, P. Gonon, C. Thienot, W. Hamidouche, Pascal Perrot, Julien Lemotheux
Abstract: The TV landscape went through significant changes during the past decades. Traditional broadcast switched from analog to digital thanks to new modulation, transport and coding systems. In the meantime, the internet brought new ways of experiencing TV services with more customization, creating the need for on-demand features and paving the way to the streaming world we know today. These ecosystems grew separately, with MPEG-TS-centric broadcast applications developed by DVB/ATSC and broadband streaming applications developed by GAFAM based on IP protocols from the IETF. This created a fragmentation of the audience in terms of usage (linear, on-demand), access network (IPTV, broadcast, OTT & 4G-LTE/5G) and devices (TVs, set-top boxes, mobiles). Broadcasters and operators addressed this fragmentation by offering services in many flavors, leveraging various, non-homogeneous technologies, which led to a complex video delivery infrastructure with a lot of redundancy. This increases the delivery cost significantly and represents wasted energy in networks and datacenters. In this paper, a solution for universal TV service delivery is proposed, based on the recently standardized DVB-I and addressing OTT, IPTV and 4G-LTE/5G mobile networks. Recently, Versatile Video Coding (VVC) [1] has been added to the DVB toolbox as an enabler for new applications in the DVB ecosystem [2]. Besides, 3GPP SA4 started characterization of video codecs for 5G applications, including VVC as a relevant compression technology [6]. VVC was issued in mid-2021 and was developed by JVET, a joint group of ITU-T and ISO/IEC. VVC has been designed to address a wide range of applications and formats and provides around 50% bandwidth saving compared to its predecessor HEVC for a similar visual quality [5]. Thus, VVC is a relevant technology to address new use cases including 8K, VR-360, gaming and augmented reality. Beyond this multiplicity of codecs, the multiplicity of delivery networks and devices brings new challenges. To address this fragmentation while maintaining the audience, DVB developed a new paradigm for media consumption in order to harmonize TV services and make them universal: DVB-I [7]. DVB-I enables, through a centralized service list, access to TV services in a network- and device-agnostic manner. The service list describes, in a universal way, the access networks and decoding capabilities, including prioritization aspects. This paper proposes a delivery architecture based on DVB-I enabling video services to reach any kind of device (set-top boxes, smartphones, TVs) on various networks, from the broadcast (DVB) to the broadband (3GPP) world. The headend produces video bitstreams (HEVC/VVC) and packages them using the Common Media Application Format (CMAF) [3], producing DVB-DASH compliant streams for delivery over IPTV, 4G-LTE, 5G and OTT. The DVB-MABR standard is leveraged, as well as 5GMS, in order to reach end devices.
Citations: 2
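DVB-I defines the service list as an XML document; the snippet below is only a hypothetical, much-simplified Python model of the idea described above: one service exposed through several prioritized service instances (broadcast, OTT/DASH), from which a client picks the first instance whose delivery network and codec it supports. All field names, labels and URLs are illustrative assumptions, not the DVB-I schema.

```python
from dataclasses import dataclass, field


@dataclass
class ServiceInstance:
    delivery: str          # e.g. "dvb-t2", "dash-ott", "5g-mbs" (illustrative labels)
    codec: str             # e.g. "hevc", "vvc"
    uri: str
    priority: int          # lower value = preferred, mirroring the prioritization idea above


@dataclass
class Service:
    name: str
    instances: list[ServiceInstance] = field(default_factory=list)

    def select(self, supported_networks: set[str], supported_codecs: set[str]):
        """Pick the highest-priority instance this device can actually receive and decode."""
        for inst in sorted(self.instances, key=lambda i: i.priority):
            if inst.delivery in supported_networks and inst.codec in supported_codecs:
                return inst
        return None


# A phone with OTT access and a VVC decoder falls back to the DASH/VVC instance,
# while a DVB-T2 TV would take the broadcast instance from the same service list.
svc = Service("Example TV 1", [
    ServiceInstance("dvb-t2", "hevc", "dvb://233a.1004.1044", priority=1),
    ServiceInstance("dash-ott", "vvc", "https://cdn.example.com/tv1/manifest.mpd", priority=2),
])
print(svc.select({"dash-ott", "5g-mbs"}, {"vvc", "hevc"}))
```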
Update on the emerging versatile video coding (VVC) standard and its applications
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517315
B. Bross, M. Wien, J. Ohm, G. Sullivan, Yan Ye
Abstract: Finalized in July 2020, the Versatile Video Coding (VVC) standard has begun moving beyond the abstract world of standardization into a diverse range of systems and products [5-7]. This presentation will provide updated information about this important standard, including new information about recent developments since completion of this major standardization project. The standard was developed by an ITU-T/ISO/IEC Joint Video Experts Team (JVET) of the ISO/IEC MPEG and ITU-T VCEG working groups and has been formally approved and published as Recommendation H.266 by ITU-T and as International Standard ISO/IEC 23090-3 by ISO and IEC. Verification testing of the capabilities of VVC has confirmed its major benefit in compression capability over previous standards for several key types of video content, including emerging applications such as ultra-high resolution and high dynamic range usage, screen content sharing and 360° immersive VR/AR/XR applications [8-10]. The presentation will include a discussion of these developments and also information on:
• Open-source and other software availability and its uses
• Recent and upcoming deployments of products and services using VVC
• Incorporation of VVC into system environments and related standards
• A new second edition of VVC including an extension for high bit rate and high bit depth applications
• Metadata support in VVC using the new VSEI standard
• Explorations in JVET for potential future video coding technology beyond VVC
Additional resources for further information will also be provided (e.g., [1-4]).
Citations: 1
Multimedia streaming analytics: quo vadis?
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517321
Cise Midoglu, M. Avelino, Shri Hari Gopalakrishnan, S. Pham, P. Halvorsen
Abstract: In today's complex OTT multimedia streaming ecosystem, the task of ensuring the best streaming experience for end-users requires extensive monitoring, and such monitoring information is relevant to various stakeholders including content providers, CDN providers, network operators, device vendors, developers, and researchers. Streaming analytics solutions address this need by aggregating performance information across streaming sessions, to be presented in ways that help improve the end-to-end delivery. In this paper, we provide an analysis of the state of the art in commercial streaming analytics solutions. We consider five products as representatives, and identify potential improvements with respect to terminology, QoE representation, standardization and interoperability, and collaboration with academia and the developer community.
Citations: 0
A novel approach to testing seamless audio & video playback in CTA WAVE
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517283
Bob Campbell, Yan-Ting Jiang
Abstract: CTA's Web Application Video Ecosystem (WAVE) project aims to improve how internet-delivered video and audio is handled on consumer electronics devices. This paper presents in further detail the mechanisms proposed to automatically verify requirements in the Device Playback Capabilities Specification [1]; specifically, the requirements that audio and video playback be seamless. A test approach using artefacts applied to the source video and audio, which has proved successful, is described: for video, QR codes are used; for audio, white noise is added and a cross-correlation algorithm applied. The test media with these artefacts applied, and the novel processing applied in a software "observation framework" component, form part of the test environment provided by WAVE to the ecosystem. These open-source tools, together with off-the-shelf hardware, give the tester a means to compare the original content with a recorded capture from the device under test, which may include mobile devices, smart TVs and other media playback devices, and to automatically assert whether the WAVE requirements are met.
Citations: 0
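The abstract names cross-correlation against a white-noise artefact as the audio mechanism but gives no detail. Below is a minimal sketch of that general technique, assuming the reference noise and the recorded capture are available as mono sample arrays at the same rate; the observation framework's actual processing may differ, and the function name and toy data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve


def audio_offset_seconds(reference: np.ndarray, capture: np.ndarray, sample_rate: int) -> float:
    """Estimate where the reference white-noise artefact starts inside the capture.

    Cross-correlating the capture against the known noise sequence peaks at the sample
    offset where the two align; dividing by the sample rate converts it to seconds.
    """
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    cap = (capture - capture.mean()) / (capture.std() + 1e-12)
    corr = fftconvolve(cap, ref[::-1], mode="valid")   # sliding dot product (cross-correlation)
    return int(np.argmax(corr)) / sample_rate


# Toy usage: embed 0.5 s of known noise 1.2 s into an otherwise silent capture and recover it.
rate = 48_000
rng = np.random.default_rng(0)
noise = rng.standard_normal(rate // 2)
capture = np.zeros(rate * 3)
capture[int(1.2 * rate):int(1.2 * rate) + noise.size] = noise
print(audio_offset_seconds(noise, capture, rate))      # prints approximately 1.2
```

Comparing such recovered offsets across segment boundaries is one way seamlessness could be asserted: a gap or repeat at a splice point shows up as a jump in the measured offset.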
Machine learning assisted real-time DASH video QoE estimation technique for encrypted traffic
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517312
R. Ul-Mustafa, C. E. Rothenberg
Abstract: With the recent rise of video traffic, it is imperative to ensure Quality of Experience (QoE). The increasing adoption of end-to-end encryption hampers any payload inspection method for QoE assessment. This poses an additional challenge for network operators monitoring the DASH video QoE of a user, which is by itself tricky due to the adaptive behaviour of HTTP Adaptive Streaming (HAS) mechanisms. To tackle these issues, we present a time-slot (window) QoE detection method based on network-level Quality of Service (QoS) features for encrypted traffic. The proposed method continuously extracts relevant QoE features for HAS from the encrypted stream in a real-time fashion: essentially, packet size and arrival time in a time slot of 1, 2, 3, 4 or 5 seconds. We then derive Inter Packet Gap (IPG) metrics from the arrival times, resulting in three recursive flow features (EMA, DEMA, CUSUM), to estimate the objective QoE following the ITU-T P.1203 standard. Finally, we compute (packet size, throughput) distributions as (10-90)th-percentile statistics within each time slot, along with other QoS features such as throughput and total packets. The proposed QoS features are lightweight and do not require any chunk-detection approach to estimate QoE, significantly reducing the complexity of the monitoring approach and potentially improving generalization to different HAS algorithms. We use different Machine Learning (ML) classifiers fed with the QoS features to yield a QoE category (Less QoE, Good, Excellent) based on bitrate, resolution and stalls. We achieve an accuracy of 79% in predicting QoE across all ABS algorithms. Our experimental evaluation framework is based on the Mininet-WiFi wireless network emulator replaying real 5G traces. The obtained results validate the proposed methods and show high accuracy of QoE estimation for encrypted DASH traffic.
Citations: 2
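As a rough illustration of the described pipeline (IPG-derived recursive features plus per-window percentile statistics, fed to an off-the-shelf classifier), a hedged sketch follows. The window handling, smoothing constant, feature forms and the random-forest choice are assumptions for illustration; the paper's exact feature definitions and models are not reproduced here, and the training data below is random placeholder data standing in for labelled sessions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def window_features(pkt_times: np.ndarray, pkt_sizes: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """QoS features for one time slot: IPG-based EMA/DEMA/CUSUM plus size/throughput statistics."""
    ipg = np.diff(pkt_times)                              # inter packet gaps (seconds)
    ema = dema = ipg[0] if ipg.size else 0.0
    cusum = 0.0
    for g in ipg:                                         # simple recursive smoothing (assumed form)
        ema = alpha * g + (1 - alpha) * ema
        dema = alpha * ema + (1 - alpha) * dema
        cusum = max(0.0, cusum + g - ipg.mean())
    duration = pkt_times[-1] - pkt_times[0] if pkt_times.size > 1 else 1.0
    throughput = pkt_sizes.sum() / duration               # bytes per second in this slot
    size_p = np.percentile(pkt_sizes, [10, 50, 90])       # packet-size distribution summary
    return np.array([ema, dema, cusum, throughput, pkt_sizes.size, *size_p])


# Train a classifier on per-window features labelled with QoE classes
# (0 = Less QoE, 1 = Good, 2 = Excellent). In practice X_train / y_train would come from
# emulated sessions with ground-truth ITU-T P.1203 scores, not random data as here.
rng = np.random.default_rng(1)
X_train = rng.random((300, 8))
y_train = rng.integers(0, 3, 300)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
```

Because every feature is computable from packet headers alone, the same extraction works on encrypted traffic, which is the point the abstract makes about avoiding payload inspection and chunk detection.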
Optimizing real-time video encoders with ML
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517269
Nelson C. Francisco, J. L. Tanou
Abstract: The main goal when designing video compression systems is to maximize video quality for a given bitrate (or to achieve a target video quality at the lowest possible bitrate), all within well-defined processing resources. Since economic and environmental aspects often place strict constraints on those resources, defining the optimal encoder toolset that maximizes compression efficiency within the available computational footprint becomes crucial.
Citations: 1
Quality assessment of video with film grain
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517293
Kai Zeng, Hojatollah Yeaganeh, Zhou Wang
Abstract: Film grain noise originally arises from small metallic silver particles on processed photographic celluloid. Although modern digital video acquisition systems are capable of largely reducing noise, sometimes to nearly invisible levels, the look of cinematic film grain has not gone away. Instead, content creators often purposely introduce simulated film grain in post-production to emulate dust in the environment, enrich texture details, and develop a certain visual tone. Despite the artistic benefits, film grain has posed significant challenges to video delivery systems. Compressing and transmitting videos containing film grain noise is extremely costly due to the large number of bits required to encode the noisy pixels, whose entropy is much higher than that of the typical visual content of the scene. Heavy compression may remove film grain, but it may also remove meaningful texture content in the visual scene or deteriorate the artistic effect of the creator's intent. It also poses major challenges to quality control of video delivery systems, for which film grain-susceptible fidelity measures are highly desirable for measurement and optimization purposes. Here, after describing the characteristics of film grain and its impact on video quality, we present a novel framework that unifies natural video quality assessment and creative-intent-friendly video quality assessment. We also demonstrate an instantiation of the framework in the context of film-grained content in terms of predicting the perception of different groups of subjects.
Citations: 0
CMCD at work with real-time, real-world data
Proceedings of the 1st Mile-High Video Conference Pub Date: 2022-03-01 DOI: 10.1145/3510450.3517268
William Q. Law, Sean McCarthy
Abstract: This study examines some of the first production data obtained from deploying Common Media Client Data (CMCD) into production environments within a global content delivery network (CDN) and a global content distributor. It covers player integrations into Shaka, hls.js and dash.js, details of CDN support, handling of CMCD in a multi-CDN environment by a content distributor, and the analysis and interpretation of the returned data.
Citations: 0
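CMCD (CTA-5004) defines a small set of standardized keys that players such as dash.js, hls.js and Shaka attach to segment requests, commonly as a CMCD query parameter, so the CDN can correlate delivery logs with client-side state. The sketch below is a hypothetical, simplified illustration of building such a request in Python: key names like br, bl, mtp, ot, sf and sid come from the CMCD specification, while the values, the payload assembly details and the example URL are illustrative assumptions rather than a conformant implementation.

```python
from urllib.parse import urlencode
from uuid import uuid4


def cmcd_query(base_url: str, *, bitrate_kbps: int, buffer_ms: int,
               throughput_kbps: int, session_id: str) -> str:
    """Append a CMCD query parameter to a media segment URL (simplified sketch)."""
    pairs = {
        "br": bitrate_kbps,          # encoded bitrate of the requested object (kbps)
        "bl": buffer_ms,             # current buffer length (ms)
        "mtp": throughput_kbps,      # measured throughput (kbps)
        "ot": "v",                   # object type: video segment
        "sf": "d",                   # streaming format: DASH
        "sid": f'"{session_id}"',    # session id, quoted as a string value
    }
    payload = ",".join(f"{k}={v}" for k, v in sorted(pairs.items()))
    return f"{base_url}?{urlencode({'CMCD': payload})}"


# Example with a hypothetical CDN URL: the resulting request carries the client state
# that the study above analyses on the server side.
print(cmcd_query("https://cdn.example.com/v/seg_00042.m4s",
                 bitrate_kbps=4500, buffer_ms=21300,
                 throughput_kbps=25400, session_id=str(uuid4())))
```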