{"title":"一种基于合成视频学习的虚拟视场质量增强技术","authors":"D. M. Rahaman, M. Paul","doi":"10.1109/DICTA.2017.8227397","DOIUrl":null,"url":null,"abstract":"With the development of displaying techniques, free viewpoint video (FVV) system shows its potential to provide immersive perceptual feeling by changing viewpoints. To provide this luxury, a large number of high quality views have to be synthesised from limited number of viewpoints. However, in this process, a portion of the background is occluded by the foreground object in the generated synthesised videos. Recent techniques, i.e. view synthesized prediction using Gaussian model (VSPGM) and adaptive weighting between warped and learned foregrounds indicate that learning techniques may fill occluded areas almost correctly. However, these techniques use temporal correlation by assuming that original texture of the target viewpoint are already available to fill up occluded areas which is not a practical solution. Moreover, if a pixel position experiences foreground once during learning, the existing techniques considered it as foreground throughout the process. However, the actual fact is that after experiencing a foreground a pixel position can be background again. To address the aforementioned issues, in the proposed view synthesise technique, we apply Gaussian mixture modelling (GMM) on the output images of inverse mapping (IM) technique for further improving the quality of the synthesised videos. In this technique, the foreground and background pixel intensities are refined from adaptive weights of the output of inverse mapping and the pixel intensities from the corresponding model(s) of the GMM. 
This technique provides a better pixel correspondence, which improves 0.10~0.46dB PSNR compared to the IM technique.","PeriodicalId":194175,"journal":{"name":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"A Novel Virtual View Quality Enhancement Technique through a Learning of Synthesised Video\",\"authors\":\"D. M. Rahaman, M. Paul\",\"doi\":\"10.1109/DICTA.2017.8227397\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the development of displaying techniques, free viewpoint video (FVV) system shows its potential to provide immersive perceptual feeling by changing viewpoints. To provide this luxury, a large number of high quality views have to be synthesised from limited number of viewpoints. However, in this process, a portion of the background is occluded by the foreground object in the generated synthesised videos. Recent techniques, i.e. view synthesized prediction using Gaussian model (VSPGM) and adaptive weighting between warped and learned foregrounds indicate that learning techniques may fill occluded areas almost correctly. However, these techniques use temporal correlation by assuming that original texture of the target viewpoint are already available to fill up occluded areas which is not a practical solution. Moreover, if a pixel position experiences foreground once during learning, the existing techniques considered it as foreground throughout the process. However, the actual fact is that after experiencing a foreground a pixel position can be background again. 
To address the aforementioned issues, in the proposed view synthesise technique, we apply Gaussian mixture modelling (GMM) on the output images of inverse mapping (IM) technique for further improving the quality of the synthesised videos. In this technique, the foreground and background pixel intensities are refined from adaptive weights of the output of inverse mapping and the pixel intensities from the corresponding model(s) of the GMM. This technique provides a better pixel correspondence, which improves 0.10~0.46dB PSNR compared to the IM technique.\",\"PeriodicalId\":194175,\"journal\":{\"name\":\"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DICTA.2017.8227397\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2017.8227397","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Novel Virtual View Quality Enhancement Technique through a Learning of Synthesised Video
With the development of display techniques, free viewpoint video (FVV) systems show their potential to provide an immersive perceptual experience by allowing the viewpoint to be changed freely. To provide this luxury, a large number of high-quality views must be synthesised from a limited number of captured viewpoints. In this process, however, a portion of the background is occluded by foreground objects in the generated synthesised videos. Recent techniques, e.g. view synthesised prediction using a Gaussian model (VSPGM) and adaptive weighting between warped and learned foregrounds, indicate that learning techniques can fill occluded areas almost correctly. However, these techniques exploit temporal correlation by assuming that the original texture of the target viewpoint is already available to fill the occluded areas, which is not a practical assumption. Moreover, if a pixel position is observed as foreground once during learning, the existing techniques treat it as foreground for the rest of the process; in fact, a pixel position can become background again after having been foreground. To address these issues, the proposed view synthesis technique applies Gaussian mixture modelling (GMM) to the output images of the inverse mapping (IM) technique to further improve the quality of the synthesised videos. In this technique, foreground and background pixel intensities are refined by adaptively weighting the output of inverse mapping against the pixel intensities of the corresponding GMM model(s). This provides a better pixel correspondence and improves PSNR by 0.10 to 0.46 dB compared to the IM technique.
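The abstract describes two ingredients: an online per-pixel Gaussian mixture that tracks how a pixel alternates between foreground and background modes, and an adaptive blend between the inverse-mapped intensity and the GMM model intensity. The sketch below is a minimal illustration of that idea, assuming a Stauffer-Grimson-style online GMM update for a single grayscale pixel; the paper's exact weighting scheme and matching rule are not specified in the abstract, so `lr`, `match_thresh`, and the blending function `refine_pixel` are illustrative assumptions.

```python
import numpy as np

def update_gmm(means, variances, weights, pixel, lr=0.05, match_thresh=2.5):
    """Online update of a per-pixel Gaussian mixture (Stauffer-Grimson style).

    A pixel matches a mode if it lies within `match_thresh` standard
    deviations of that mode's mean. Matched modes are pulled toward the
    new observation; mode weights are then renormalised, so a pixel that
    returns to background after a foreground episode re-strengthens the
    background mode instead of staying foreground forever.
    """
    dist = np.abs(pixel - means)
    matched = dist < match_thresh * np.sqrt(variances)   # boolean per mode
    weights = (1.0 - lr) * weights + lr * matched        # grow matched weights
    rho = lr * matched                                   # per-mode learning rate
    means = (1.0 - rho) * means + rho * pixel
    variances = (1.0 - rho) * variances + rho * (pixel - means) ** 2
    weights = weights / weights.sum()                    # renormalise
    return means, variances, weights

def refine_pixel(im_pixel, gmm_mean, alpha):
    """Adaptively blend the inverse-mapped intensity with the GMM intensity.

    `alpha` is the (assumed) adaptive weight given to the inverse-mapping
    output; the remainder comes from the learned GMM model.
    """
    return alpha * im_pixel + (1.0 - alpha) * gmm_mean
```

For example, a pixel whose modes start at intensities 100 (background) and 200 (foreground) and which then observes background repeatedly will shift its mixture weight back toward the background mode, so the GMM intensity used for occlusion filling is the background one.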