Adaptive image selection method for focus stacking based on a low-level vision task-driven network and liquid lens
Jiale Wei, Shanshan Wang, Qun Hao, Mengyao Liu, Yang Cheng
Applied Optics 64(10), 2653-2662 (published 2025-04-01)
DOI: 10.1364/AO.555601 | https://doi.org/10.1364/AO.555601
Citations: 0
Abstract
An all-in-focus (AIF) image is employed broadly in fields such as microscopy imaging, medical imaging, and high-level vision tasks. Focus stacking is a key technology for merging focal stack images into an AIF image. Considerable effort has been devoted to reconstructing AIF images accurately; however, little attention has been paid to capturing focal stack images effectively. This paper proposes an adaptive image selection method for capturing focal stack images based on a low-level vision task-driven network and a liquid lens. The proposed method maintains the overall image quality while using the minimum number of focal stack images. The low-level vision task-driven network, termed FocalAIF-Net, consists of a two-branch FocalNet and an auxiliary low-level vision task network, AIFNet. FocalNet estimates the blur map and the focal map from a defocused image and its depth map. Quantitative and qualitative evaluations on three benchmark datasets show that FocalAIF-Net achieves acceptable generalization performance. Additionally, in real-world experiments we employ a liquid lens that zooms swiftly under the guidance of the proposed decision algorithm to verify the effectiveness of the method. The results show that the focal stack acquired by our method merges into a more accurate AIF image and consumes less running time than a stack captured at a common fixed interval by mechanical movement.
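For readers unfamiliar with the merging step of focus stacking, the sketch below shows a minimal, generic way to fuse a pre-aligned focal stack into an AIF image using a Laplacian-based sharpness measure. It is an illustration only and is not the FocalAIF-Net pipeline or the decision algorithm from the paper; the function name and file names are hypothetical.

```python
# Minimal focus-stacking sketch (illustrative only, not the paper's method).
# Assumes a pre-aligned focal stack and uses the absolute Laplacian response
# as a simple per-pixel focus measure.
import numpy as np
import cv2

def merge_focal_stack(stack):
    """Merge a list of aligned BGR images into one all-in-focus composite.

    For every pixel, the slice whose smoothed Laplacian magnitude (a common
    sharpness proxy) is largest contributes that pixel to the result.
    """
    gray = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in stack]
    # Absolute Laplacian, smoothed so the focus measure is locally stable.
    focus = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
             for g in gray]
    best = np.argmax(np.stack(focus, axis=0), axis=0)   # index of sharpest slice per pixel
    stack_arr = np.stack(stack, axis=0)                  # shape (N, H, W, 3)
    rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
    return stack_arr[best, rows, cols]                   # AIF composite

# Usage (hypothetical file names):
# stack = [cv2.imread(f"slice_{i}.png") for i in range(5)]
# aif = merge_focal_stack(stack)
# cv2.imwrite("aif.png", aif)
```

The quality of such a fusion depends directly on how well the captured slices cover the scene's depth range, which is the capture-side problem the paper's adaptive selection method addresses.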