{"title":"Terahertz focusing enhancement for graphene-based tunable metalens","authors":"Jianzhou Huang, B. Hu, Guo-cui Wang, Weiguang Liu, Zongyuan Wang, Yan Zhang, Juan Liu","doi":"10.1117/12.2618749","DOIUrl":"https://doi.org/10.1117/12.2618749","url":null,"abstract":"We propose a type of hybrid structure metalens composed of a graphene-loaded metallic metasurface sandwiched by two mutually orthogonal gratings, which can work in transmission modes for dynamic terahertz wavefront manipulation with tunability and enhanced focusing efficiency. Experimental results show that due to the multi-reflection between the metasurface layer and the grating layers, the focusing efficiency is enhanced by 1.8 times, and the focal length of the metalens is increased by 0.61mm by increasing the applied gate voltage on the graphene from 0V to 1.4V. We hope the proposed structure may open a new avenue for reconfigurable THz metasurfaces with high efficiencies.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126137052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A convolutional neural network based complex scene classification framework using transfer deep combined convolutional activations","authors":"Shuyun Liu, Hong Wang, Yutong Jiang, Zhonglin Yang, Zhiyang Ma","doi":"10.1117/12.2618661","DOIUrl":"https://doi.org/10.1117/12.2618661","url":null,"abstract":"In many scene classification applications, the variety of surface objects, high within-category diversity and between-category similarity carry challenges for the classification Framework. Most of CNN-based classification methods only extract image features from a single network layer, which may cause the completed image information difficult to extract in complex scenes. We propose a novel transfer deep combined convolutional activations (TDCCA) to integrate both the low-level and high-level features. Extensive comparative experiments are conducted on UC Merced database, Aerial Image database and NWPU-RESISC45 database. The results reveal that our proposed TDCCA achieves higher experimental accuracies than other up-to-date popular methods.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"12277 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129837962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and analysis of an optical zoom system-based on super-elastic film liquid lens","authors":"Zhaoyang Liu, Huajie Hong, Zihao Gan, Yangyang Hou, Zhe Zeng","doi":"10.1117/12.2616337","DOIUrl":"https://doi.org/10.1117/12.2616337","url":null,"abstract":"As a new type of zoom element, the liquid lens has the characteristics of fast response, no wear, and small size, which brings new ideas to the design of the zoom system. This paper proposes a liquid-solid composite zoom optical system without mechanical movement. In the system, two liquid lenses are used as the zoom group and the compensation group of the system, so that the image plane position is always stable during the zooming process of the system. Based on the Gaussian theory, the relationship between the focal length of the system and the focal length of a single liquid lens is derived, and a multi-layer traversal algorithm is proposed to solve the initial structure parameters of the system. The optimized design of the zoom optical system and image quality analysis was completed by using ZEMAX software. The results show that the system can achieve continuous zoom in the range of 20-60mm, the zoom ratio of the system is 3, and the image quality is good. The modulation transfer function of each focal length of the zoom system at a spatial frequency of 50lp/mm is greater than 0.2, and the RMS value of wave aberration is greater than 0.3 times the wavelength of the probe light, which meets the design requirements.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117128986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method of improving registration accuracy of infrared and visible images","authors":"Bingyi Xiong, Junsheng Shi, Huaqiang Wang, Jing Liang, Ruimin Huang","doi":"10.1117/12.2616454","DOIUrl":"https://doi.org/10.1117/12.2616454","url":null,"abstract":"Fusion of infrared (IR) and visible sensor images is the widely applied. The registration is the basis for sensor fusion and fusion methods are very sensitive to the level of registration accuracy. However, the different imaging systems of the both lead to quite different image characteristics in a same scene and significant misalignment due to differences in field of view, lens distortion and other camera characteristics. So that, the registering accurately the both of sensor images is very challenging. A method to improve the accuracy of image registration is proposed in this paper. The method is based a self-designed long-wave IR/visible dual-band imaging system for capturing simultaneously both of images, which is not only with synchronous focusing and optical registration as much as possible, but also with a sliding potentiometer that record the voltage corresponding to the focusing distance. At first, an affine transformation of the registration at several different distances is acquired with two calibration board images captured self-designed calibration board by the dual-band imaging system. Then, the affine transformation matrices corresponding to the several finite distance are interpolated to obtain more affine transformation matrices within a certain distance, and an accurate look-up table is established. Last, when the dual-band imaging system is working, the current focal length is read out according to the sliding potentiometer in the system, and then the corresponding affine transformation matrix is searched for image registration. The proposed method is evaluated by comparing the deviation of the corresponding feature point coordinates on both of calibration board images before and after registration. Experimental results show that the proposed method can improve the registration accuracy of IR and visible images at different distances.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130730275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a new pixel LED automobile headlamp","authors":"Lei Liu, Lin Li","doi":"10.1117/12.2611963","DOIUrl":"https://doi.org/10.1117/12.2611963","url":null,"abstract":"Because LED has the advantages of energy saving, environmental protection, long service life and fast response time for traditional light sources, LED is gradually replacing the position of traditional light sources in the field of lighting, and LED is more and more widely used in automobile headlights. However, there are some safety problems during vehicle driving. For example, the glare of strong high beam lights on oncoming vehicles and passers-by at night has become the main cause of many traffic accidents. The emergence of ADB (adaptive driving beam) has reduced the occurrence of such traffic accidents to a certain extent, However, compared with pixel car headlights, it still has some limitations. For example, its controllable precision is not high enough, and its ability to deal with some more complex environments is not enough. At the same time, due to its projection ability, pixel headlights can not only remind drivers of driving safety, but also help other traffic participants, and their safety has been greatly improved. This paper introduces a new type of intelligent automobile headlamp based on pixel led. In this automobile headlamp, five LEDs with 1024 pixels are used, which are divided into imaging area and lighting area. The light distribution lens matching each area is designed. At the same time, the brightness and brightness of each pixel LED are controlled through circuit design, so as to realize the projection imaging of traffic signs and the light type transformation of high beam anti glare and low beam. Pixel LED car headlights have more accurate control accuracy and higher safety than general adaptive car headlights.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126593357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Terahertz modulation devices based on patterned laser-induced graphene","authors":"Zongyuan Wang, B. Hu","doi":"10.1117/12.2620041","DOIUrl":"https://doi.org/10.1117/12.2620041","url":null,"abstract":"Laser-induced graphene (LIG) has received extensive attention due to its excellent properties such as high electrical conductivity, high thermal stability and electrical conductivity, simple synthesis, and low manufacturing cost of patterned structures. However, most research on LIG has focused on electrical applications. In this work, we first examine the influence of the substrate on the LIG generated on polyimide, and then fabricate patterned LIG structures, including gratings and Fresnel zone plates for terahertz (THz) wave modulation. The function of the structure is proved through the experiment of the terahertz focal plane imaging system. It is expected that the LIG-based structure can widen the application of THz technology.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114394587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical coherence tomography microvascular imaging method based on svOCT and cmOCT","authors":"Juan Wang, Liquan Dong, Lingqin Kong, Ming Liu, Mei Hui, Yuejin Zhao","doi":"10.1117/12.2611625","DOIUrl":"https://doi.org/10.1117/12.2611625","url":null,"abstract":"OCT is a new imaging technology based on the principle of Michelson interferometer to perform tomography of sample tissue. OCT has the characteristics of high-resolution, non-invasive, real-time detection, and non-destructive imaging. The OCT vascular imaging method can generate high-contrast microvascular images without injecting a contrast agent. This paper presents a new microvascular imaging method (sv-cmOCT) that combines two existing microvascular imaging methods - speckle variance optical coherence tomography (svOCT) and correlation mapping optical coherence tomography (cmOCT). The binarised image of svOCT is used to mask the cmOCT image, which can effectively suppress background noise and static artefacts, and ultimately generate a microvascular image with high contrast. In this paper, blood flow mimic experiments and animal experiments were designed to validate the imaging performance of the microvascular imaging method. It is demonstrated that the proposed method can improve the signal-to-noise ratio of microvascular images well compared to methods such as svOCT and cmOCT and B-scan mask cmOCT. The study of microvascular imaging methods will greatly help to improve the understanding of the pathology of some diseases and thus may improve the clinical interpretation of human diseases!","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123462447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on PTZ tracking control system based on kinematics feedforward algorithm","authors":"Zeyuan Ji, Qingliang Jiao, Ming Liu","doi":"10.1117/12.2610750","DOIUrl":"https://doi.org/10.1117/12.2610750","url":null,"abstract":"We propose a kinematics feedforward and PI controller combination algorithm. The proposed algorithm measures the speed of the target in the image coordinate system to calculate the normalized position and speed of the target, and uses the kinematics calculation of the target speed as a feedforward. This strategy improves the tracking accuracy of the PTZ. We designed a feedforward PI control simulation and hardware system test platform, and finally performed simulation and experimental verification. The results show that this article Compared with the traditional PTZ controller vision feedback control strategy, the proposed method has better dynamic performance and higher tracking accuracy.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122380887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research status of collaborative detection of battlefield situation and its development trend in intelligent battlefield","authors":"Jia Zhang, Bing-min Hu, Ying Lu, Hu Huang, Zongyuan Wang, Yue Gu, Benchang Zheng, Jinlei Ren, Jun Li","doi":"10.1117/12.2620696","DOIUrl":"https://doi.org/10.1117/12.2620696","url":null,"abstract":"Battlefield situational awareness is the core condition that determines the success or failure of the battlefield, and it is also an important application direction of photodetectors. The rapid development of AI technology in recent years is about to cause major changes in future wars. The new AI battlefield will also put forward new urgent needs for situational awareness. This article summarizes the current main modes of collaborative detection of battlefield situation awareness and its research status, including radar / infrared composite detection, multi-source data fusion of radar / infrared detection, cooperative target recognition, target tracking, etc. On this basis, combined with the current development trend of the intelligence level of the main battlefield equipment, we get the development needs of future intelligent battlefield situational awareness for new types of collaborative detection, including requirements for its style, angle, speed, and detection targets of distributed collaborative detection. Based on this, the key development directions and core issues to be solved for intelligent battlefield situational awareness in the future are proposed.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117329056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Terahertz optical logic calculation based on diffraction neural network","authors":"Chenjie Xiong, B. Hu","doi":"10.1117/12.2619618","DOIUrl":"https://doi.org/10.1117/12.2619618","url":null,"abstract":"Based on the diffraction neural network, a small-volume diffractive optical element for optical logic calculation in the terahertz band is designed. The optical logic calculation ability and calculation accuracy of the diffraction neural network are calculated. The results show that after training, the diffraction neural network can accurately calculate eight kinds of optical logic and has 100% recognition accuracy when the number of network layers is greater than 5. This article mainly introduces the design principle and calculation process of the diffraction neural network, and compares the recognition accuracy of different network layers.","PeriodicalId":201899,"journal":{"name":"International Conference on Optical Instruments and Technology","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117281302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}