{"title":"A composite surface registration method for freeform surface evaluation based on ICP coarse registration and PSO fine registration","authors":"Yusheng Zang , Changsheng Li , Jiajun Tang , Zhaoxiang Chen , Jianjun Ding , Shuming Yang , Zhuangde Jiang","doi":"10.1016/j.optlaseng.2025.109079","DOIUrl":"10.1016/j.optlaseng.2025.109079","url":null,"abstract":"<div><div>The assessment of form error for freeform surfaces is crucial for ensuring the machining accuracy of optical components and performing potential subsequent error correction. Although the traditional iterative closest point (ICP) algorithm is widely used for surface registration due to its simplicity and broad applicability, it has limitations such as sensitivity to initial optimization positions, which seriously affect the accuracy of registration. For this reason, this paper proposes a composite surface registration method based on ICP for coarse registration and particle swarm optimization (PSO) for fine registration. First, the influence of six mounting errors on the form accuracy is discussed through error sensitivity analysis. Second, the characteristics of the ICP algorithm and the PSO algorithm in surface registration are comparatively analyzed in terms of accuracy and efficiency. Finally, the composite registration method is successfully applied to the surface registration of freeform surfaces. The results show that the composite registration method achieved decreases of 88.4 % and 1.4 % in registration error compared to the ICP algorithm when processing the freeform suction cup and the silicon mirror, respectively. Additionally, the computational time is reduced by 40 % and 20 % compared to the PSO algorithm, respectively. Even when compared to advanced hybrid registration algorithms combining genetic algorithms (GA) and ICP, the proposed method still maintains advantages in registration accuracy and efficiency.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109079"},"PeriodicalIF":3.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
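The abstract above describes a two-stage pipeline: ICP for coarse alignment, then PSO refining the pose. As a minimal sketch of that idea (my own illustration, not the paper's implementation — the paper registers 3D freeform surfaces with six mounting-error parameters, while this 2D toy uses a three-parameter rigid pose and brute-force nearest neighbours):

```python
import math, random

def nearest(p, pts):
    # brute-force closest point; a k-d tree would be used at real scale
    return min(pts, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2)

def apply_pose(pts, theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return [(c*x - s*y + tx, s*x + c*y + ty) for x, y in pts]

def mean_error(src, dst):
    return sum(math.dist(p, nearest(p, dst)) for p in src) / len(src)

def icp_coarse(src, dst, iters=15):
    # classic ICP: match to nearest neighbours, solve the closed-form
    # 2D rigid fit (Procrustes) for the matched pairs, apply, repeat
    cur = list(src)
    for _ in range(iters):
        pairs = [(p, nearest(p, dst)) for p in cur]
        n = len(pairs)
        cx = sum(p[0] for p, _ in pairs) / n; cy = sum(p[1] for p, _ in pairs) / n
        dx = sum(q[0] for _, q in pairs) / n; dy = sum(q[1] for _, q in pairs) / n
        sxx = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy) for p, q in pairs)
        sxy = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx) for p, q in pairs)
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        cur = apply_pose(cur, th, dx - (c*cx - s*cy), dy - (s*cx + c*cy))
    return cur

def pso_refine(src, dst, particles=16, iters=25, seed=0):
    # PSO searches a small (theta, tx, ty) correction around the ICP result;
    # particle 0 starts at zero correction, so PSO can only improve on ICP
    rng = random.Random(seed)
    pos = [[0.0, 0.0, 0.0]] + [[rng.uniform(-0.1, 0.1) for _ in range(3)]
                               for _ in range(particles - 1)]
    vel = [[0.0] * 3 for _ in range(particles)]
    pbest = [list(p) for p in pos]
    pcost = [mean_error(apply_pose(src, *p), dst) for p in pos]
    gbest = list(min(zip(pcost, pbest))[1])
    for _ in range(iters):
        for i in range(particles):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = mean_error(apply_pose(src, *pos[i]), dst)
            if cost < pcost[i]:
                pcost[i], pbest[i] = cost, list(pos[i])
        gbest = list(min(zip(pcost, pbest))[1])
    return apply_pose(src, *gbest), min(pcost)
```

Seeding one particle at the identity correction is what makes the composite scheme safe: the refined error can never exceed the ICP error, matching the paper's motivation of using PSO only for the fine stage.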
{"title":"Structural analyses of additively manufactured carbon fiber reinforced polymer with embedded fiber optic using THz spectroscopy and micro-computed tomography","authors":"Magdalena Mieloszyk , Pawel Madejski , Sebastian Wroński , Isyna Izzal Muna","doi":"10.1016/j.optlaseng.2025.109086","DOIUrl":"10.1016/j.optlaseng.2025.109086","url":null,"abstract":"<div><div>This study examines the internal structure of additively manufactured (AM) carbon fiber reinforced polymer (CFRP) composites embedded with fiber optics containing fiber Bragg grating (FBG) sensors, using THz spectroscopy and micro-computed tomography (micro-CT). Due to the high conductivity of carbon fiber, the application of THz spectroscopy to CFRP faces significant challenges, necessitating careful optimization of inspection parameters. Conversely, micro-CT leverages its deeper penetration capabilities and high-resolution imaging to provide accurate and detailed internal imaging of CFRP composites. THz spectroscopy detects the influence of embedded fiber optics on the AM CFRP structure, while micro-CT excels by producing detailed 3D representations of the internal structure, effectively identifying the precise location of the fiber optic. These findings highlight the importance of selecting appropriate non-destructive testing (NDT) methods based on the specific material properties, demonstrating that micro-CT is an invaluable complementary tool to THz spectroscopy for achieving a thorough assessment of CFRP composites in materials science, medicine, and engineering.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109086"},"PeriodicalIF":3.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera calibration based on hybrid differential evolution and crayfish optimization algorithm","authors":"Zhongqian Chen , Ming Yang , Jing Zhang , Mengjian Zhang , Chengbin Liang , Deguang Wang","doi":"10.1016/j.optlaseng.2025.109088","DOIUrl":"10.1016/j.optlaseng.2025.109088","url":null,"abstract":"<div><div>This study proposes a hybrid differential evolution and crayfish optimization algorithm (HDECOA) for precise camera calibration. HDECOA synergizes differential evolution, enhanced by an adaptive parameter control strategy, with the crayfish optimization algorithm within a parallel and competitive framework. The hybrid algorithm achieves an effective balance between exploration and exploitation by improving population diversity and optimizing evolutionary efficiency. Finally, HDECOA is applied to calibrate two cameras with distinct parameters. Experimental comparisons evaluate the mean reprojection error of the proposed method against those of methods employing the crayfish optimization algorithm, particle swarm optimization, differential evolution, the sparrow search algorithm, and Zhang's method. <em>K</em>-means cluster analysis is utilized to evaluate reprojection errors, and relative reprojection errors are calculated under varying levels of Gaussian noise. The proposed method achieves mean reprojection errors of 0.054 pixels and 0.166 pixels for the two cameras, respectively. Comprehensive experimental results reveal the rapid convergence, high accuracy, robust performance, and versatility of the proposed method, highlighting its superiority over the comparison methods.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109088"},"PeriodicalIF":3.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
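The mean reprojection error the abstract reports (e.g. 0.054 pixels) is the standard objective that evolutionary calibrators minimize. A hedged sketch of the differential-evolution half of the idea (my own toy, not HDECOA — no crayfish stage, no adaptive parameter control, and a deliberately simplified two-parameter pinhole model with square pixels and a shared principal-point coordinate):

```python
import math, random

def project(P, fx, cx):
    # minimal pinhole: fy = fx, principal point (cx, cx), camera at the
    # origin looking down +Z -- a deliberate simplification for the sketch
    X, Y, Z = P
    return (fx * X / Z + cx, fx * Y / Z + cx)

def mean_reproj_error(pts3d, obs2d, fx, cx):
    # mean pixel distance between projected and observed image points
    return sum(math.dist(project(P, fx, cx), uv)
               for P, uv in zip(pts3d, obs2d)) / len(pts3d)

def de_calibrate(pts3d, obs2d, gens=60, npop=15, F=0.6, CR=0.9, seed=1):
    # classic DE/rand/1/bin minimizing the mean reprojection error
    rng = random.Random(seed)
    lo, hi = (400.0, 100.0), (1600.0, 900.0)   # search box for (fx, cx)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(npop)]
    cost = [mean_reproj_error(pts3d, obs2d, *p) for p in pop]
    for _ in range(gens):
        for i in range(npop):
            a, b, c = rng.sample([j for j in range(npop) if j != i], 3)
            trial = [pop[i][d] if rng.random() > CR else
                     pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(2)]
            tc = mean_reproj_error(pts3d, obs2d, *trial)
            if tc < cost[i]:       # greedy selection keeps the better vector
                pop[i], cost[i] = trial, tc
    best = min(range(npop), key=lambda i: cost[i])
    return pop[best], cost[best]
```

The paper's hybrid runs this kind of DE population in parallel competition with a crayfish-optimization population; the cost function and the pixel-space error metric stay the same.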
{"title":"Corrigendum to “Virtual line light source-based beam collimation for pulse-driven Gaussian-like VCSEL array emission” [Optics and Lasers in Engineering 190 (2025) 108970]","authors":"Zihan Yi , Naerzhuoli Madeniyeti , Yinong Zeng , Xiao-Nan Tao , Aiming Ge , Hui Zhao , Jian Qiu , Kefu Liu , Connie Chang-Hasnain","doi":"10.1016/j.optlaseng.2025.109004","DOIUrl":"10.1016/j.optlaseng.2025.109004","url":null,"abstract":"","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109004"},"PeriodicalIF":3.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144223137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape reconstruction from focus detection and signal classification using an optical microscope","authors":"Shaohang Wang","doi":"10.1016/j.optlaseng.2025.109080","DOIUrl":"10.1016/j.optlaseng.2025.109080","url":null,"abstract":"<div><div>Shape reconstruction from focus using a microscope is a cost-effective technique for restoring three-dimensional shapes, making it well-suited for high-precision measurement needs at microscopic scales. However, its effectiveness is limited by the variability of focus-measured signals and the alignment accuracy between corresponding image points. To address these limitations, a novel shape reconstruction method is proposed that integrates focus detection, the classification of focus-measured signals, and depth reconstruction. This method employs deep learning to categorize the geometric shapes of focus-measured signals, identify those characterized by noise, and perform depth reconstruction solely on the valid focus-measured signals that remain. Additionally, a straightforward calibration method for the posture angles of the vision-motion system is developed, ensuring that the reconstruction system produces aligned image sequences and is utilized alongside the proposed reconstruction method to achieve a high-quality depth map. Compared to conventional shape reconstruction methods that do not utilize the classification of focus-measured signals, the proposed method demonstrates significant advantages in reconstruction quality and accuracy. The results indicate that the proposed method achieves remarkable performance in signal classification, demonstrating an excellent ability to separate noise and minimize error in the depth map. This enables the generation of more accurate, high-quality depth maps. Moreover, the proposed method can learn and continuously improve its reconstruction performance through further training, effectively addressing the adverse effects of focus-measured signal variability on shape reconstruction. In summary, the proposed method not only creates high-quality depth maps for various precision measurements but also serves as the core technique for 3D digital microscopes.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109080"},"PeriodicalIF":3.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
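The baseline the abstract builds on — shape from focus — can be sketched briefly (my own illustration, not the paper's deep-learning pipeline): score each pixel's sharpness in every frame of a z-stack with a focus measure, then take the depth of the best-focused frame. The paper's contribution is to additionally classify the per-pixel focus-measure curve and discard noise-dominated ones before this step.

```python
def modified_laplacian(img, x, y):
    # sum of absolute second differences in x and y; rises with sharpness
    return (abs(2 * img[y][x] - img[y][x-1] - img[y][x+1]) +
            abs(2 * img[y][x] - img[y-1][x] - img[y+1][x]))

def depth_from_focus(stack, z_values):
    """stack: list of 2D images (nested lists) taken at depths z_values."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            scores = [modified_laplacian(frame, x, y) for frame in stack]
            depth[y][x] = z_values[scores.index(max(scores))]
    return depth
```

In this naive form, a pixel whose focus-measure curve is pure noise still gets assigned the depth of its (meaningless) maximum — exactly the failure mode the proposed signal classification removes.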
{"title":"Magnetic sensing with orientation identification based on multiple self-mixing interference","authors":"Shaokun Huo , Wu Sun , Zhenning Huang","doi":"10.1016/j.optlaseng.2025.109087","DOIUrl":"10.1016/j.optlaseng.2025.109087","url":null,"abstract":"<div><div>The density detection and orientation identification of the magnetic field are crucial across various industrial sectors and scientific research. In this work, we detected both the magnetic field density and its orientation by utilizing polarized light passing through a TGG crystal to generate multiple self-mixing interference via the Faraday effect. The spectral lines of the multiple self-mixing interference were experimentally obtained for magnetic fields ranging from -80.31 mT to 79.44 mT, and the spectral lines exhibited opposite trends when the magnetic field was inverted. The results were analyzed based on the decay coefficients of the spectral lines obtained via fitting, and these coefficients exhibited an upward or a downward trend when the orientation of the magnetic field was inverted.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109087"},"PeriodicalIF":3.5,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144071649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UMIENet: Underwater image enhancement based on multi-degradation knowledge integration","authors":"Pin Lv , Fusheng Zha , Xiangji Wang , Rongchao Li , Mantian Li , Pengfei Wang , Wei Guo , Lining Sun","doi":"10.1016/j.optlaseng.2025.109069","DOIUrl":"10.1016/j.optlaseng.2025.109069","url":null,"abstract":"<div><div>Due to the absorption and scattering effects of water on light, underwater images generally suffer from multiple degradations such as blur, color cast, and non-uniform illumination, which severely affect image quality and visual processing tasks. Therefore, underwater image enhancement (UIE) has gained widespread application in various marine exploration tasks. While supervised learning-based methods currently dominate this field, existing methods have two main problems. The limited availability of real paired images and the incompleteness of the degradation types in synthetic datasets restrict model training performance, and most UIE models are designed for specific types of degradation, lacking systematic processing of multiple underwater degradations. These problems lead to poor model performance. In this work, we construct an Underwater Multi-Degradation Knowledge Integration dataset, called UMDKI, which models multiple degradation factors including blur, color cast, and non-uniform illumination by incorporating a revised image formation model and point light mathematical modeling. Besides, we propose an Underwater Multi-degradation Image Enhancement Network, called UMIENet, which integrates the advantages of various traditional methods and achieves collaborative enhancement of multiple degradations. Extensive experiments demonstrate that the proposed UMIENet achieves excellent performance on multiple benchmarks and shows good effectiveness in real underwater vision tasks.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109069"},"PeriodicalIF":3.5,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
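The "revised image formation model" mentioned above builds on the classic underwater model I(x) = J(x)·t(x) + B·(1 − t(x)), with per-channel transmission t = exp(−β·d). A hedged sketch of how a synthetic degradation (as in datasets like UMDKI) can be produced from it — the attenuation coefficients and background color below are illustrative values of my own, not the paper's, and a real pipeline adds blur and non-uniform illumination on top:

```python
import math

def degrade_pixel(rgb, depth, background=(0.1, 0.5, 0.6),
                  beta=(0.8, 0.3, 0.2)):
    """Apply the underwater image formation model to one RGB pixel.

    rgb: clean radiance in [0, 1]; depth: scene distance through water;
    background: veiling (backscatter) color; beta: per-channel attenuation
    (red attenuates fastest in water, hence the largest coefficient).
    """
    out = []
    for c in range(3):
        t = math.exp(-beta[c] * depth)           # transmission for channel c
        out.append(rgb[c] * t + background[c] * (1.0 - t))
    return tuple(out)
```

Inverting this model per channel is what restores the color cast; learning-based enhancers such as UMIENet effectively learn the inversion jointly with deblurring and illumination correction rather than estimating t and B explicitly.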
{"title":"Single-shot intensity-based measurement of high-order OAM in partially coherent vortex beams","authors":"Zhao Zhang , Yuanbo Wu , Xin Liu , Zhiwen Yan , Xinshun Zhao , Dong Xu , Greg Gbur , Bernhard J. Hoenders , Yangjian Cai , Jun Zeng","doi":"10.1016/j.optlaseng.2025.109085","DOIUrl":"10.1016/j.optlaseng.2025.109085","url":null,"abstract":"<div><div>Existing methods for measuring the orbital angular momentum (OAM) of partially coherent vortex beams face challenges like long collection times and intricate recovery algorithms, limiting them to low-order OAM and specific eigenmode beams. Here, we propose a single-shot intensity technique based on astigmatic phase modulation to measure OAM up to 30 in more general partially coherent vortex beams constructed from non-eigenmodes. This method transforms intensity into bright and dark fringes associated with OAM, accurately determining OAM magnitude and sign, even in turbulence. Our technique opens up a new route to high-speed and large-capacity optical communication.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109085"},"PeriodicalIF":3.5,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-contrast autofocusing in modified circular symmetric Airy beams via Fourier phase modulation","authors":"Bingsong Cao , Zhangrong Mei , Yonghua Mao , Jinqi Song , Kaikai Huang","doi":"10.1016/j.optlaseng.2025.109073","DOIUrl":"10.1016/j.optlaseng.2025.109073","url":null,"abstract":"<div><div>We proposed the modified circular symmetric Airy beam (MCSAB) in Fourier space with an analytical expression and generated the beam via Fourier phase modulation. The autofocusing distance and focusing intensity distribution of the MCSAB can be controlled through its parameters, like those of the traditional circular Airy beam (CAB), and its autofocusing intensity contrast is sharply enhanced compared with the CAB. In experiments, we proved that the MCSAB is more energy-efficient to generate through a single phase-only spatial light modulator than the traditional CAB and has enhanced autofocusing intensity contrast as well as enhanced autofocusing intensity. Experiments show good agreement with simulations. The MCSAB may be promising for practical applications owing to its high efficiency and enhanced autofocusing.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109073"},"PeriodicalIF":3.5,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic blur layer-based computer-generated holography","authors":"Zichun Le, Aoxin Fei, Xiangrui Duan, Shun Li","doi":"10.1016/j.optlaseng.2025.109081","DOIUrl":"10.1016/j.optlaseng.2025.109081","url":null,"abstract":"<div><div>Computer-generated holography (CGH) based on deep learning has rapidly advanced, surpassing traditional physics-based methods that rely on optical wave simulations and signal processing. We propose a multilayer hologram generation model for generating 3D phase-only holograms (POHs) using a layer-based approach. The 3D object is represented as multiple layers, with Gaussian convolution kernels applied to generate target images that simulate realistic blur effects for non-focal layers. The model utilizes learnable initial phases to train and optimize the blur effects across these layers. By taking amplitude and depth images as input, the proposed method is capable of synthesizing both 2D and 3D holograms with realistic blur effects. Both simulations and optical experiments demonstrate that the proposed method achieves exceptional hologram generation performance, with blur effects closely matching those observed in real-world scenarios.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"193 ","pages":"Article 109081"},"PeriodicalIF":3.5,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
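The core of the layer-based blur idea in the last record can be sketched simply (my own illustration — the paper learns the blur jointly with initial phases, whereas this applies a fixed Gaussian whose width grows with a layer's distance from the focal plane; shown in 1D for brevity, and `sigma_per_unit` is an assumed parameter):

```python
import math

def gaussian_kernel(sigma, radius=None):
    # normalized 1D Gaussian; sigma == 0 degenerates to the identity kernel
    if sigma == 0:
        return [1.0]
    radius = radius or max(1, int(3 * sigma))
    k = [math.exp(-i * i / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_layer(row, layer_z, focus_z, sigma_per_unit=1.0):
    # defocus blur grows linearly with distance from the focal layer;
    # boundaries are handled by clamping (replicate padding)
    sigma = sigma_per_unit * abs(layer_z - focus_z)
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(row)
    return [sum(k[j + r] * row[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]
```

Pre-blurring each non-focal layer this way gives the hologram optimizer target images that already look defocused, which is what lets the reconstructed 3D scene show realistic depth-dependent blur.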