Authors: Xuebin Liu, Yuang Chen, Chongji Zhao, Jie Yang, Huan Deng
Journal: Optics and Lasers in Engineering, Volume 184, Article 108629 (Q2, OPTICS; IF 3.5)
DOI: 10.1016/j.optlaseng.2024.108629
Published: 2024-10-05 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0143816624006079
Foreground-background separation and deblurring super-resolution method
The limited depth of field (DOF) inherent in cameras often results in defocused, blurry backgrounds when images are captured in large-aperture mode. This not only causes the loss of crucial background information but also impedes efficient reconstruction of the background regions. Super-resolution (SR) techniques usually struggle to produce high-quality results for images captured with large apertures. To enhance the reconstruction quality of defocused regions in large-aperture images, this paper proposes a foreground-background separation and deblurring super-resolution (FBSDSR) method. Following the idea of foreground-background separation, the large-aperture image is first divided, based on depth information, into a sharp foreground region (If) and a blurry background region (Ib). The background region (Ib) is then deblurred using an end-to-end iterative filter adaptive network (IFAN). This deblurring step refocuses the background, ultimately restoring an image with sharp detail throughout. Finally, the enhanced super-resolution generative adversarial network (Real-ESRGAN), which specializes in SR of real-world images, is used to process the sharp all-in-focus image. The method yields high-quality reconstructions of both the foreground and background of large-aperture images. Experimental results demonstrate that the proposed method effectively and clearly reconstructs entire large-aperture images, overcoming the limitation of existing methods that struggle to reconstruct defocused regions, and significantly enhances the quality and resolution of large-aperture images. Specifically, when FBSDSR is integrated with Real-ESRGAN, the PSNR, LPIPS, NIQE, and hyperIQA metrics improve by approximately 2.2 %, 45.1 %, 34.7 %, and 10.9 %, respectively.
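The first stage of the pipeline described in the abstract, depth-based foreground-background separation, can be sketched as below. This is a minimal illustration, not the authors' implementation: the scalar depth `threshold` is a hypothetical parameter, and the actual deblurring (IFAN) and SR (Real-ESRGAN) stages use pretrained networks that are only referenced in comments here.

```python
import numpy as np

def separate_foreground_background(image, depth, threshold):
    """Split an image into foreground (If) and background (Ib) regions
    by thresholding a per-pixel depth map: pixels nearer than `threshold`
    are treated as the sharp foreground.

    image: (H, W, C) float array; depth: (H, W) float array.
    Returns (i_f, i_b, fg_mask); masked-out pixels are zeroed.
    """
    fg_mask = depth < threshold                       # nearer -> foreground
    i_f = np.where(fg_mask[..., None], image, 0.0)    # sharp foreground region
    i_b = np.where(fg_mask[..., None], 0.0, image)    # blurry background region
    return i_f, i_b, fg_mask

def recompose(i_f, deblurred_b, fg_mask):
    """Recombine the sharp foreground with the (separately deblurred)
    background into an all-in-focus image, ready for a downstream SR
    model such as Real-ESRGAN (not included in this sketch)."""
    return np.where(fg_mask[..., None], i_f, deblurred_b)
```

In the full method, `i_b` would be passed through a pretrained IFAN model before recomposition, and the recomposed all-in-focus image through Real-ESRGAN; the hard binary mask used here is a simplification of whatever blending the paper applies at region boundaries.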
About the journal:
Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques