International Conference on Optical Instruments and Technology: Latest Publications

Squeeze-and-excitation blocks embedded YOLO model for fast target detection under poor imaging conditions
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2618343
Shuyun Liu, Bo Zhao, Y. Wang, Mengqi Zhu, Huini Fu
{"title":"Squeeze-and-excitation blocks embedded YOLO model for fast target detection under poor imaging conditions","authors":"Shuyun Liu, Bo Zhao, Y. Wang, Mengqi Zhu, Huini Fu","doi":"10.1117/12.2618343","DOIUrl":"https://doi.org/10.1117/12.2618343","url":null,"abstract":"How to detect targets under poor imaging conditions is receiving significant attention in recent years. The accuracy of object recognition position and recall rate may decrease for the classical YOLO model under poor imaging conditions because targets and their backgrounds are hard to discriminate. We proposed the improved YOLOv3 model whose basic structure of the detector is based on darknet-53, which is an accurate but efficient network for image feature extraction. Then Squeeze-and-Excitation (SE) structure is integrated after non-linearity of convolution to collect spatial and channel-wise information within local receptive fields. To accelerate inference speed, Nvidia TenorRT 6.0 is deployed into on Nvidia Jetson series low power platform. Experiments results show that the improved model may greatly achieve the inference speed without significantly reducing the detection accuracy comparing with the classic YOLOv3 model and some other up-to-date popular methods.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123237748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1

Enhanced performance integral imaging 3D display method using quarter-overlapped microlens arrays
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2619521
Nai-Qin Zhao, Juan Liu, Zifeng Zhao
{"title":"Enhanced performance integral imaging 3D display method using quarter-overlapped microlens arrays","authors":"Nai-Qin Zhao, Juan Liu, Zifeng Zhao","doi":"10.1117/12.2619521","DOIUrl":"https://doi.org/10.1117/12.2619521","url":null,"abstract":"An integral (II) imaging method using quarter-overlapped microlens arrays (QOMLA) is proposed to improve the display performance. The principle and the simulation of POMLA is analyzed by geometric optics and wave optics, and the optical experiments verify the enhancement of performance. POMLA can double the angular sampling density of II systems in each dimension to increase the spatial resolution, and is able to construct multiple central depth planes by adjusting the gap between the two layers to expand the depth of field. Moreover, POMLA is easier to be processed compared with the same lenslets size single-layer microlens array and reduces the processing cost.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116668103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

A high through-put image colorimeter for ultra-high-resolution micro-led panel inspection
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2614080
Wei Zhou, Jiang He, Xinyu Peng
{"title":"A high through-put image colorimeter for ultra-high-resolution micro-led panel inspection","authors":"Wei Zhou, Jiang He, Xinyu Peng","doi":"10.1117/12.2614080","DOIUrl":"https://doi.org/10.1117/12.2614080","url":null,"abstract":"With microLED panel technology quickly evolving to smaller pixel size and larger resolution, optical metrology is on-demand to support both design verification and process yield control by providing a solution with high resolving power high through-put and less calibration spectrum dependency. This paper reviews the trade-off between all conflicting factors, and discusses the calibration algorithm to remove the spectrum dependence, describes a novel imaging colorimeter which precisely attacks all above technology inconsistencies with the final goal of: single micron pixel resolving power by combination of optical resolution and digital imaging processing algorithm, large optical FOV to reduce number of frames to be captured for whole panel inspection, calibration algorithm to precisely transfer the true color and brightness information between NIST traceable light source without spectrum matching demand, and inherent final balanced high through-put image capturing.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129556162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

Multi-frame generative network for image super-resolution
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2612129
Q. Zhao, Liquan Dong, Ming Liu, Xuhong Chu, Qingliang Jiao, Bu Ning, Lingqin Kong, Yuejin Zhao, Mei Hui
{"title":"Multi-frame generative network for image super-resolution","authors":"Q. Zhao, Liquan Dong, Ming Liu, Xuhong Chu, Qingliang Jiao, Bu Ning, Lingqin Kong, Yuejin Zhao, Mei Hui","doi":"10.1117/12.2612129","DOIUrl":"https://doi.org/10.1117/12.2612129","url":null,"abstract":"In recent years, the field of image super-resolution has mainly focused on the single-image super-resolution (SISR) task, which is to estimate an HR image from a single LR input. Due to the ill-posed ness of the SISR problem, these methods are limited to increasing the high-frequency details of the image by learning the a priori of the image. And multi-frame super-resolution (MFSR) provides the possibility to reconstruct rich details using the spatial and temporal difference information between images. With the increasing popularity of array camera technology, this key advantage makes MFSR an important issue for practical applications. We propose a new structure to complete the task of multi-frame image super-resolution. Our network takes multiple noisy images as input and generates a denoised, super-resolution RGB image as output. First, we align the multi-frame images by estimating the dense pixel optical flow between the images, and construct an adaptive fusion module to fuse the information of all frames. Then we build a feature fusion network to simultaneously fuse the depth feature information of multiple LR images and the internal features of the initial high-resolution image. In order to evaluate real-world data, We use the BurstSR data set, which includes real images of smartphones and highresolution SLR cameras, to prove the effectiveness of the proposed multiframe image super-resolution algorithm.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128237027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

3D face reconstruction based on position map regression network for lesion analysis of port wine stains
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2613583
Deng Xiao, Ya Zhou, Yingyi Gui, Chenbo Dong, Jiacheng Han
{"title":"3D face reconstruction based on position map regression network for lesion analysis of port wine stains","authors":"Deng Xiao, Ya Zhou, Yingyi Gui, Chenbo Dong, Jiacheng Han","doi":"10.1117/12.2613583","DOIUrl":"https://doi.org/10.1117/12.2613583","url":null,"abstract":"The evaluation of port wine stain based on three-dimensional information can overcome the inaccuracy of twodimensional image evaluation methods commonly used in clinic. In this paper, an end-to-end multitasking method is designed for the application of 3D information acquisition of port wine stain. Based on deep learning and position map regression network, the reconstruction from 2D pictures to face 3D point cloud is realized. the facial information of patients with port wine stain is represented by UV position map recording 3D point information of the face, and the dense relationship between 3D points and points with semantic meaning in UV space is characterized with this method. The deep learning network framework based on Encoder-Decoder structure is used to complete unconstrained end-to-end face alignment and 3D face reconstruction, whose parameters are obtained by training the data set with lightweight CNN structure. In the process of neural network training and end-to-end unconstrained image facial reconstruction, each point on the UV position map can be assigned different weights, which can not only be used to improve the network performance in neural network training, but also be used to assign corresponding weights to the focus areas with different disease course in the three-dimensional information reconstruction of the focus area therefore the accuracy of the reconstruction results can be increased. With the help of this method, the three-dimensional reconstruction results can be quickly obtained from a single patient's face image, which can be used for subsequent accurate lesion information analysis and treatment.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130913672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

A multi-depth augmented reality head-up display system using holographic optical elements
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2619460
Zhenlv Lv, Juan Liu, Liangfa Xu
{"title":"A multi-depth augmented reality head-up display system using holographic optical elements","authors":"Zhenlv Lv, Juan Liu, Liangfa Xu","doi":"10.1117/12.2619460","DOIUrl":"https://doi.org/10.1117/12.2619460","url":null,"abstract":"An augmented reality (AR) head-up display (HUD) system based on holographic optical elements (HOEs) with multiple depths, large area, high diffraction efficiency and a single picture generation unit (PGU) is proposed. Since HOEs has excellent wavelength selectivity and angle selectivity, as well as the ability to modulate complex light waves, HOEs can image the red, green and blue parts of the color image emitted by PGU on different depth planes. The experimental results show that the three HOEs of red, green, and blue clearly display images at different depths of 150cm, 500cm, and 1000cm, and the diffraction efficiencies are 75.2%, 73.1%, and 67.5%, respectively. The size of HOEs is 20cm×15cm. In addition, the field of view (FOV) and eye-box (EB) of the system are 12°×10° and 9.5cm×11.2cm.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132966083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

High speed structured illumination microscopy based on compressed sensing: numerical simulation
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2619610
Jiaqi Zeng, Zhang Chonglei, Shao Zongshuo, Xiaocong Yuan
{"title":"High speed structured illumination microscopy based on compressed sensing: numerical simulation","authors":"Jiaqi Zeng, Zhang Chonglei, Shao Zongshuo, Xiaocong Yuan","doi":"10.1117/12.2619610","DOIUrl":"https://doi.org/10.1117/12.2619610","url":null,"abstract":"In traditional optical microscopy imaging system, the resolution of time mainly depends on the detector’s detection speed, usually in millisecond or microsecond magnitude. While the spatial resolution is limited by the optical diffraction limit, the lateral resolution of ordinary microscopies generally only reaches 200nm.Just as biological structure has a wide spatial scale, biological living processes also have a broad time scale. When observing biological subcellular organelles, resolution and the speed of life activities should be considered. For the dynamic process, it’s meaningless to simply improve the spatial resolution without correspondingly increasing the imaging speed which should be no less than the movement speed of the observed object. Existing super-resolution or high-speed optical imaging is limited by the mutual constraints of spatial and time resolution, making it difficult to obtain both super-resolution and high-speed optical imaging. In order to break the constraints of this game and gain high-speed super-resolution images, the CS-SIM system combines SIM (structured illumination microscopy) and CSP (compressed sensing photography). Since they both are wide filed imaging and CSP is a passive receiving imaging technology, SIM and CSP have potential to combine closely and achieve super-resolution and highspeed imaging.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129232482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0

A new multi-spectral image registration algorithm
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2619451
Wu Kun, Bin Li, Zongxin Liu, Yifan Shen, Jing Hu
{"title":"A new multi-spectral image registration algorithm","authors":"Wu Kun, Bin Li, Zongxin Liu, Yifan Shen, Jing Hu","doi":"10.1117/12.2619451","DOIUrl":"https://doi.org/10.1117/12.2619451","url":null,"abstract":"In this paper, an EOH based multi-spectral image registration algorithm is proposed, which is robust to rotation and scale changes. The key points of EOH descriptor have no main direction, and the neighborhood size of key points is fixed, so it is not robust to rotation change and scale change. The existing multi-spectral image registration methods mainly use the gradient features of the neighborhood of key points, but the gradient information between multi-spectral images is not stable, resulting in the limited improvement effect of these methods. The method proposed in this paper uses mutual information measure to calculate the relative rotation angle between images, so as to determine the main direction of key points, calculate the key point descriptor according to the main direction of key points, and make the size of key point neighborhood change with the scale of key points. Experimental results show that the proposed method in this paper is more robust to rotation and scale change.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131577107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1

Active non-line-of-sight human pose estimation based on deep learning
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2610975
Qianqian Xu, Liquan Dong, Lingqin Kong, Yuejin Zhao, Ming Liu
{"title":"Active non-line-of-sight human pose estimation based on deep learning","authors":"Qianqian Xu, Liquan Dong, Lingqin Kong, Yuejin Zhao, Ming Liu","doi":"10.1117/12.2610975","DOIUrl":"https://doi.org/10.1117/12.2610975","url":null,"abstract":"Non-Line-of-Sight technology is to image objects that are hidden from the camera's view. It has a wide range of application prospects in robotic vision, national defense, remote sensing, medical imaging, and unmanned driving. Active Non-Lineof- Sight imaging mainly relies on time-resolved optical impulse responses. The Non-Line-of-Sight imaging system emits ultra-short light pulses to illuminate the diffuse reflection wall, and uses ultra-fast time-resolved single-photon detectors to collect multiple reflected photon information, thereby obtaining information in the hidden scene. Finally, various reconstruction algorithms are used to reconstruct the hidden scene. However, most of the existing reconstruction algorithms have the problems of slow reconstruction speed and fuzzy reconstruction results, especially in the aspect of human pose estimation. In this article, we describe a method of active Non-Line-of-Sight human pose estimation based on deep learning. In order to solve the problem of lack of deep learning data, we simulate large amounts of pseudo-transient images for the network, including various complex actions: walking, jumping, turning, bending back and forth, rotating, using the confocal Non-Line-of-Sight imaging model. And then we train the simulated transient images using light cones Transformation and U-net coding and decoding network structure. Finally, we examine the performance of our method on synthetic and experimental datasets. The prediction results show that our method can not only estimate the pose of real measured non-view human pose data, but also significantly improve the quality of reconstruction.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116015193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1

Combination of dot-matrix lighting and floodlighting for multipath interference suppression in ToF imaging
International Conference on Optical Instruments and Technology Pub Date: 2022-07-22 DOI: 10.1117/12.2619673
Xiaoyue Liu, Jiayi Huo, Huimin Yan
{"title":"Combination of dot-matrix lighting and floodlighting for multipath interference suppression in ToF imaging","authors":"Xiaoyue Liu, Jiayi Huo, Huimin Yan","doi":"10.1117/12.2619673","DOIUrl":"https://doi.org/10.1117/12.2619673","url":null,"abstract":"Time-of-flight (ToF) cameras obtain the depth information of the whole scene simultaneously by floodlighting it. In the presence of mutual reflection between the targets, the measurement would suffer from the multipath interference (MPI), because the signal received by the sensor is a mixture of direct component and multipath (or global) component. MPI would lead to significant measurement errors. In this paper is introduced a method of separating the direct and global components by once dot-matrix lighting and twice floodlighting with different frequencies. With dot-matrix lighting, the depth information at the dot matrix position is almost only determined by the direct component. The phase value of the direct component is readily calculated. The global component at the dot position with floodlighting is separated by minimizing the separation error through solving the nonlinear least square problem. Then the global component of the whole scene can be obtained by two-dimensional interpolation from data at dot matrix position. As a result, the direct component can be calculated by subtracting the interpolation result from the floodlighting signal, and the depth were calculated only from direct component. Semi-physical experiments were made for three kinds of scenes, blank corner with uniform reflectivity, the corner with different reflectivity areas and the corner with an object placed in front of it. The results demonstrate that the MPI has been significantly suppressed in all scenes. Especially in the area with strong MPI in the first two kinds of scenes, the measurement errors can decrease to about 10%~20%.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115685900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0