High-quality computer-generated holography based on Vision Mamba
Lei Yang, Shengyuan Xu, Chunzheng Yang, Chenliang Chang, Qichao Hou, Qiang Song
Optics and Lasers in Engineering, Volume 184, Article 108704
DOI: 10.1016/j.optlaseng.2024.108704 | Published: 2024-11-18
Abstract
Deep learning, especially through model-driven unsupervised networks, offers a novel approach to efficient computer-generated hologram (CGH) generation. However, current model-driven CGH models are built primarily on convolutional neural networks (CNNs), which struggle to achieve high-quality hologram reconstruction due to their limited receptive fields. Although Vision Transformers (ViTs) excel at processing long-range visual information, they carry a heavy computational load. The recent emergence of Vision Mamba (ViM) presents a promising avenue for addressing these challenges. In this study, we introduce CVMNet, a lightweight model that combines the precision of convolutional layers for local feature extraction with the long-range modeling ability of state-space models (SSMs) to enhance CGH quality. By processing feature channels through the ViM in parallel, CVMNet effectively reduces the number of model parameters. Numerical reconstructions and optical experiments demonstrate that CVMNet can generate high-quality 1080p holograms in just 16 ms, achieving an average PSNR of over 30 dB and effectively suppressing speckle noise in reconstructed images. Additionally, CVMNet demonstrates robust generalization capabilities.
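The abstract describes a hybrid design in which convolutional layers capture local detail while channel-parallel Vision Mamba (state-space) branches capture long-range dependencies. The sketch below is only a minimal illustration of that idea under assumed choices: it uses PyTorch, and the class names (ConvSSMBlock, SimpleSSM), the simplified diagonal state-space scan, and the channel-group split are illustrative assumptions, not the authors' CVMNet implementation.

```python
# Illustrative sketch (not the authors' code): a block that pairs a depthwise
# convolution (local features) with a simplified linear state-space scan over
# flattened spatial tokens, split across channel groups to mimic the paper's
# channel-parallel ViM idea. All module and parameter names are assumptions.
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Minimal diagonal state-space scan: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))  # per-channel decay (pre-sigmoid)
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))

    def forward(self, x):                 # x: (B, L, C)
        a = torch.sigmoid(self.log_a)     # keep the recurrence stable in (0, 1)
        h = torch.zeros_like(x[:, 0])
        ys = []
        for t in range(x.shape[1]):       # plain Python scan; real Mamba uses a fused selective-scan kernel
            h = a * h + self.b * x[:, t]
            ys.append(self.c * h)
        return torch.stack(ys, dim=1)

class ConvSSMBlock(nn.Module):
    """Local convolutional branch plus long-range SSM branches over channel groups."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.ssms = nn.ModuleList(SimpleSSM(channels // groups) for _ in range(groups))
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        chunks = tokens.chunk(self.groups, dim=-1)   # channel-parallel groups
        long_range = torch.cat([m(t) for m, t in zip(self.ssms, chunks)], dim=-1)
        long_range = long_range.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(local + long_range)

if __name__ == "__main__":
    block = ConvSSMBlock(channels=32)
    features = torch.rand(1, 32, 64, 64)  # toy feature map standing in for an amplitude/phase tensor
    print(block(features).shape)          # torch.Size([1, 32, 64, 64])
```

In this sketch the channel split means each group's scan runs over a smaller feature dimension, which is one plausible way a channel-parallel ViM arrangement could reduce parameter count; the actual CVMNet architecture, scan ordering, and parameterization are described in the paper itself.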
Journal Introduction
Optics and Lasers in Engineering aims to provide an international forum for the exchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions that target the practical use of methods and devices, the development and enhancement of solutions, and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques