A preliminary study on multi-view video streaming over underwater acoustic networks

T. Fujihashi, Hai-Heng Ng, Ziyuan Pan, S. Saruwatari, H. Tan, T. Watanabe
{"title":"A preliminary study on multi-view video streaming over underwater acoustic networks","authors":"T. Fujihashi, Hai-Heng Ng, Ziyuan Pan, S. Saruwatari, H. Tan, T. Watanabe","doi":"10.1109/UT.2013.6519832","DOIUrl":null,"url":null,"abstract":"Several techniques to realize multi-view video streaming have been recently proposed for terrestrial wireless networks. Multi-view video allows users to watch object of interest from different angle and can be applied to many applications. Nonetheless, the existing techniques do not cater for high-latency networks, such as underwater acoustic networks. Due to the slow speed of sound in water, the response delay of multi-view video becomes longer, which in turn degrades users' experience. This paper proposes a new approach, called Feedback-based Multi-view video Streaming for Underwater Acoustic Networks (FMS-UAN) so as to reduce the response delay of multi-view video streaming. FMS-UAN has two unique features. First, a sender can transmit its predicted frames before receiving feedback frame from a receiver node. We exploit long propagation delay by allowing both sender and receiver to simultaneously transmit their respective frames to each other in a collision-free manner. Second, even if the prediction was wrong, the sender still can encode the correct frame area with transmitted frames to reduce the retransmission bit-rate.","PeriodicalId":354995,"journal":{"name":"2013 IEEE International Underwater Technology Symposium (UT)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Underwater Technology Symposium (UT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UT.2013.6519832","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Several techniques for multi-view video streaming have recently been proposed for terrestrial wireless networks. Multi-view video allows users to watch an object of interest from different angles and can serve many applications. However, the existing techniques are not designed for high-latency networks such as underwater acoustic networks. Because of the slow speed of sound in water, the response delay of multi-view video grows, which in turn degrades the user experience. This paper proposes a new approach, called Feedback-based Multi-view video Streaming for Underwater Acoustic Networks (FMS-UAN), to reduce the response delay of multi-view video streaming. FMS-UAN has two unique features. First, a sender can transmit its predicted frames before receiving a feedback frame from the receiver node. We exploit the long propagation delay by allowing the sender and the receiver to transmit their respective frames to each other simultaneously in a collision-free manner. Second, even if the prediction is wrong, the sender can still encode the correct frame area using the already-transmitted frames as references, reducing the retransmission bit rate.
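To make the abstract's timing argument concrete, the following sketch (not from the paper) compares the view-switch response delay of plain request/response streaming against speculative transmission that overlaps the acoustic round trip, as FMS-UAN describes. The 1 km link distance, the frame transmission times, and the function names are illustrative assumptions; only the roughly 1500 m/s speed of sound in seawater is a standard figure.

```python
# Illustrative timing sketch (assumed parameters, not the authors' implementation).
# It shows why pushing a predicted view during the propagation delay shortens the
# response time, and why re-encoding only the mispredicted region costs less than
# resending a full frame.

SOUND_SPEED_MPS = 1500.0  # approximate speed of sound in seawater


def one_way_delay(distance_m: float) -> float:
    """Acoustic propagation delay for a single hop, in seconds."""
    return distance_m / SOUND_SPEED_MPS


def reactive_switch_delay(distance_m: float, frame_tx_s: float) -> float:
    """Receiver requests the new view, then waits for the frame:
    request propagation + frame transmission + frame propagation."""
    d = one_way_delay(distance_m)
    return d + frame_tx_s + d


def speculative_switch_delay(distance_m: float, frame_tx_s: float,
                             hit: bool, partial_tx_s: float) -> float:
    """Sender pushes its predicted view while the receiver's feedback is still
    in flight. On a hit, the frame arrives without an extra request round trip;
    on a miss, only the mispredicted region is re-encoded and resent
    (partial_tx_s < frame_tx_s)."""
    d = one_way_delay(distance_m)
    if hit:
        return frame_tx_s + d
    return d + partial_tx_s + d


if __name__ == "__main__":
    dist, full_tx, partial_tx = 1000.0, 0.8, 0.3  # 1 km link, assumed tx times (s)
    print("reactive        :", reactive_switch_delay(dist, full_tx))
    print("speculative hit :", speculative_switch_delay(dist, full_tx, True, partial_tx))
    print("speculative miss:", speculative_switch_delay(dist, full_tx, False, partial_tx))
```

With these assumed numbers, a reactive view switch takes about 2.13 s, a correctly predicted speculative switch about 1.47 s, and even a misprediction about 1.63 s, since only the partial correction crosses the link again.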