Integration of 3D audio and 3D video for FTV
M. P. Tehrani, T. Yendo, T. Fujii, K. Takeda, K. Mase, M. Tanimoto
2009 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, May 2009. DOI: 10.1109/3DTV.2009.5069681
Abstract
We developed an FTV (free-viewpoint television) system that processes and displays 3D scene information in real time, allowing users to freely control their own viewpoint/listening-point position. The free listening point can be generated either (i) from a ray-space representation of the sound wave field (source-sound independent), or (ii) by acoustic transfer function estimation (source-sound dependent) combined with blind separation of the sound sources. Free viewpoint generation is based on the ray-space method, enhanced with multipass dynamic programming. Integration is performed either (i) by representing the sound wave field and the images together in ray space, or (ii) by combining each camera's video signal with the acoustic transfer function of the same location into integrated 3DAV data. The prototype integrated audio-visual viewer achieves both good image and sound quality in real time.
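To make the source-sound-dependent audio path more concrete, the sketch below illustrates (it is not the paper's implementation) how blind source separation of microphone mixtures followed by re-rendering at a virtual listening point might look. It uses FastICA for the separation step and a simple distance-based delay-and-attenuation model in place of a measured acoustic transfer function; the sample rate, positions, and function names are assumptions for illustration only, and real room acoustics would require convolutive (not instantaneous) separation.

import numpy as np
from sklearn.decomposition import FastICA

FS = 16000  # sample rate in Hz (assumed for this sketch)

def separate_sources(mic_signals: np.ndarray) -> np.ndarray:
    """Blind source separation of (n_mics, n_samples) mixtures via FastICA."""
    ica = FastICA(n_components=mic_signals.shape[0], random_state=0)
    # FastICA expects (n_samples, n_features); transpose in and out.
    return ica.fit_transform(mic_signals.T).T

def render_listening_point(sources: np.ndarray,
                           source_positions: np.ndarray,
                           listener_position: np.ndarray,
                           speed_of_sound: float = 343.0) -> np.ndarray:
    """Mix separated sources at a virtual listener using distance-based delay
    and 1/r attenuation (a crude stand-in for an estimated transfer function)."""
    n_samples = sources.shape[1]
    out = np.zeros(n_samples)
    for src, pos in zip(sources, source_positions):
        r = np.linalg.norm(pos - listener_position)
        delay = int(round(r / speed_of_sound * FS))  # propagation delay in samples
        gain = 1.0 / max(r, 1e-3)                    # simple inverse-distance attenuation
        out[delay:] += gain * src[:n_samples - delay]
    return out

A usage example would separate, say, two microphone channels and then call render_listening_point with the listener position chosen interactively, mirroring the free listening-point control described in the abstract.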