Consistent spatio-temporal filling of disocclusions in the multiview-video-plus-depth format
Martin Köppel, Xi Wang, D. Doshkov, T. Wiegand, P. Ndjiki-Nya
2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), 12 November 2012. DOI: 10.1109/MMSP.2012.6343410
Depth image-based rendering (DIBR) techniques enable a wide variety of 3-D applications, including the synthesis of additional virtual views from a multiview-video-plus-depth (MVD) representation. The MVD format consists of texture and depth information for a limited number of original views of the same scene. One of the main obstacles to DIBR is the disocclusion problem: because the scene is observed only from the original viewpoints, background regions uncovered in a rendered virtual view have no corresponding source data. This leads to missing information in the generated virtual views, especially in extrapolation scenarios. Our work describes a novel algorithm that synthesizes such disoccluded textures. The proposed synthesizer enhances the visual experience by taking both spatial and temporal video information into account, and it incorporates image registration into the framework to compensate for global motion in the sequences. Objective and subjective gains are shown over three state-of-the-art approaches.
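To illustrate where the disocclusions addressed by the paper come from, the following is a minimal sketch (not the authors' implementation) of DIBR forward warping under a 1-D parallel camera assumption: each pixel is shifted horizontally by a depth-dependent disparity, and target pixels that receive no source pixel are the disoccluded holes a synthesizer must fill. The `disparity_scale` factor (focal length times baseline) is a hypothetical parameter.

```python
# Hypothetical illustration of DIBR warping and disocclusion detection,
# not the method proposed in the paper.
import numpy as np

def warp_to_virtual_view(texture, depth, disparity_scale):
    """texture: (H, W, 3) uint8, depth: (H, W) float > 0 (metres)."""
    h, w = depth.shape
    virtual = np.zeros_like(texture)
    z_buffer = np.full((h, w), np.inf)      # keep the closest surface per target pixel
    filled = np.zeros((h, w), dtype=bool)

    disparity = disparity_scale / depth     # closer objects shift further
    cols = np.arange(w)

    for y in range(h):
        x_target = np.round(cols - disparity[y]).astype(int)
        valid = (x_target >= 0) & (x_target < w)
        for x_src, x_dst in zip(cols[valid], x_target[valid]):
            if depth[y, x_src] < z_buffer[y, x_dst]:   # nearer pixel wins
                z_buffer[y, x_dst] = depth[y, x_src]
                virtual[y, x_dst] = texture[y, x_src]
                filled[y, x_dst] = True

    disocclusions = ~filled  # uncovered background: input to a hole-filling synthesizer
    return virtual, disocclusions
```

The global-motion compensation mentioned in the abstract could, in a similarly simplified setting, be approximated by registering consecutive frames (e.g., via a feature-based homography estimate) before reusing temporal background information; the paper itself specifies the registration actually used.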