{"title":"增强运动线索自动深度提取2d到3d视频转换","authors":"Gustavo Alves, Eduardo A. B. da Silva","doi":"10.1109/ITS.2014.6947992","DOIUrl":null,"url":null,"abstract":"In this paper we present two methods of depth extraction for 2D-to-3D video conversion. One for a scene captured with a static camera and other for the case of a moving camera, both using information from the motion present on the scene. In the first method, temporal difference, morphological operations and a region filling technique are used to segment the moving objects and define the foreground. Moreover, analysis of how the detected regions vary over nearby frames is applied to ensure the temporal consistency. For the regions corresponding to background, depth values are obtained merging information from linear perspective and texture characteristics. The second method is applied when a dynamic background is detected. It requires an input sequence encoded with H.264, so the motion information extracted from the compressed bitstream is used to assign depth values for the entire scene.","PeriodicalId":359348,"journal":{"name":"2014 International Telecommunications Symposium (ITS)","volume":"195 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Enhanced motion cues for automatic depth extraction for 2D-to-3D video conversion\",\"authors\":\"Gustavo Alves, Eduardo A. B. da Silva\",\"doi\":\"10.1109/ITS.2014.6947992\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we present two methods of depth extraction for 2D-to-3D video conversion. One for a scene captured with a static camera and other for the case of a moving camera, both using information from the motion present on the scene. In the first method, temporal difference, morphological operations and a region filling technique are used to segment the moving objects and define the foreground. Moreover, analysis of how the detected regions vary over nearby frames is applied to ensure the temporal consistency. For the regions corresponding to background, depth values are obtained merging information from linear perspective and texture characteristics. The second method is applied when a dynamic background is detected. 
It requires an input sequence encoded with H.264, so the motion information extracted from the compressed bitstream is used to assign depth values for the entire scene.\",\"PeriodicalId\":359348,\"journal\":{\"name\":\"2014 International Telecommunications Symposium (ITS)\",\"volume\":\"195 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 International Telecommunications Symposium (ITS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITS.2014.6947992\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 International Telecommunications Symposium (ITS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITS.2014.6947992","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Enhanced motion cues for automatic depth extraction for 2D-to-3D video conversion
In this paper we present two methods of depth extraction for 2D-to-3D video conversion: one for scenes captured with a static camera and one for a moving camera, both exploiting the motion present in the scene. In the first method, temporal differencing, morphological operations, and a region-filling technique are used to segment the moving objects and define the foreground; analyzing how the detected regions vary over nearby frames then enforces temporal consistency. For regions corresponding to the background, depth values are obtained by merging information from linear perspective with texture characteristics. The second method is applied when a dynamic background is detected: it requires an input sequence encoded with H.264, so the motion information extracted from the compressed bitstream can be used to assign depth values to the entire scene.
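A minimal sketch of the static-camera pipeline described above, using OpenCV and NumPy. The difference threshold, kernel sizes, voting rule, and blending weight are illustrative assumptions, not values taken from the paper; the region filling is implemented as a standard flood-fill hole-filling step.

```python
import cv2
import numpy as np

def segment_foreground(prev_frame, curr_frame, diff_thresh=25):
    """Temporal difference + morphology + region filling.
    Threshold and kernel size are illustrative assumptions."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Temporal difference between consecutive frames
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening/closing to remove noise and merge fragments
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Region filling: flood-fill the background from a border seed,
    # then invert and merge to fill holes inside moving regions
    flood = mask.copy()
    h, w = mask.shape
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)
    return mask | cv2.bitwise_not(flood)  # 255 = foreground

def temporally_consistent(masks, min_votes=None):
    """Keep a pixel as foreground only if it is detected in a majority
    of nearby frames -- one simple way to enforce temporal consistency."""
    votes = np.stack([m > 0 for m in masks]).sum(axis=0)
    if min_votes is None:
        min_votes = len(masks) // 2 + 1
    return (votes >= min_votes).astype(np.uint8) * 255

def background_depth(frame, alpha=0.7):
    """Background depth: blend a linear-perspective gradient (bottom of
    the frame assumed nearer, hence brighter) with a texture cue
    (smoothed gradient energy). The weight alpha is an assumption."""
    h, w = frame.shape[:2]
    perspective = np.repeat(
        np.linspace(0.0, 255.0, h, dtype=np.float32)[:, None], w, axis=1)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture = cv2.GaussianBlur(cv2.magnitude(gx, gy), (21, 21), 0)
    texture = 255.0 * texture / (texture.max() + 1e-6)
    return (alpha * perspective + (1.0 - alpha) * texture).astype(np.uint8)
```

The final depth map for a static-camera frame would then combine the two cues, e.g. by assigning near (high) depth values inside the temporally consistent foreground mask and the `background_depth` values elsewhere.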
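For the moving-camera method, a sketch of one plausible mapping from compressed-domain motion to depth. Under a translating camera, nearer blocks exhibit larger apparent motion (motion parallax), so motion-vector magnitude can serve as an inverse-distance cue. The `mvs` input is assumed to be macroblock motion vectors already parsed from the H.264 bitstream (for example, exported with FFmpeg's `+export_mvs` flag); the parsing itself, and the paper's exact assignment rule, are out of scope here.

```python
import cv2
import numpy as np

def depth_from_motion_vectors(mvs, frame_shape, block=16):
    """Assign a dense depth map from H.264 motion-vector magnitudes.
    `mvs` is assumed to be an iterable of (x, y, dx, dy) tuples giving
    each macroblock's top-left corner and motion vector; this mapping
    is a hypothetical sketch, not the paper's exact method."""
    h, w = frame_shape[:2]
    depth = np.zeros((h, w), np.float32)
    for x, y, dx, dy in mvs:
        # Paint each macroblock with its motion magnitude
        depth[y:y + block, x:x + block] = np.hypot(dx, dy)
    # Smooth block boundaries into a dense map and normalize to 8 bits
    depth = cv2.GaussianBlur(depth, (31, 31), 0)
    return (255.0 * depth / (depth.max() + 1e-6)).astype(np.uint8)
```

Working directly on bitstream motion vectors avoids computing optical flow from decoded pixels, which is presumably why the method requires H.264 input: the encoder has already paid the cost of block-based motion estimation.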