{"title":"Siamese network visual tracking algorithm based on online learning","authors":"Chengyue Zhang, Zhiqiang Hou, Pu Lei, Chen Lilin, Sugang Ma, Wangsheng Yu","doi":"10.12086/OEE.2021.200140","DOIUrl":"https://doi.org/10.12086/OEE.2021.200140","url":null,"abstract":"Visual tracking algorithm based on a Siamese network is an important method in the field of visual tracking in recent years, and it has good performance in tracking speed and accuracy. However, most tracking algorithms based on the Siamese network rely on an off-line training model and lack of online update to tracker. In order to solve this problem, we propose an online learning-based visual tracking algorithm for Siamese networks. The algorithm adopts the idea of double template, treats the target in the first frame as a static template, and uses the high confidence update strategy to obtain the dynamic template in the subsequent frame; in online tracking, the fast transform learning model is used to learn the apparent changes of the target from the double template, and the target likelihood probability map of the search area is calculated according to the color histogram characteristics of the current frame, and the background suppression learning is carried out. Finally, the response map obtained by the dual templates is weighted, and the final prediction result is obtained. The experimental results on OTB2015, TempleColor128, and VOT datasets show that the test results of this algorithm are improved compared with the mainstream algorithms in recent years and have better tracking performance in target deformation, similar background interference, fast motion, and other scenarios.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85719047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preparation method for polyimide films with imaging potential","authors":"Lv Gang, Yang Wei, Mao Danbo, Wu Shibin, Ren Ge","doi":"10.12086/OEE.2021.200381","DOIUrl":"https://doi.org/10.12086/OEE.2021.200381","url":null,"abstract":"Polyimide (PI) film is widely used in aerospace, microelectronics, and other fields because of its excellent thermal stability and mechanical strength. However, there are very few reports about its application in the direction of optical imaging. To use PI film for imaging, the requirements for the optical homogeneity of the PI film are extremely demanding. The optical homogeneity of the stretch-resistant PI film proposed in this paper with 100 mm diameter and low thermal expansion coefficient meets the Rayleigh criterion, which has the potential for applications in the imaging field. In addition, the tensile strength of this PI is 285 MPa, which is ~2.6 times that of the PMDA-ODA type PI; the coefficient of thermal expansion is about 3.2 ppmK-1, which is comparable to that of the Novastrat®905 type PI and is one order of magnitude lower than that of the commercial PI films. These excellent basic properties reserve more space to further improve the space adaptability of the PI film. The solution of the optical homogeneity of the PI film will lay the foundation for its application in thin film diffractive optical elements.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90931559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mode field diameter measurement of single mode fiber using Bessel function fitting method based on variable aperture in far field","authors":"H. Yue, Xiang Huazhong, Zhao Renyan, Tu Jiankun, Zheng Gang","doi":"10.12086/OEE.2021.200308","DOIUrl":"https://doi.org/10.12086/OEE.2021.200308","url":null,"abstract":"The mode field diameter is an important parameter of single-mode fiber, and the GB.15972.45-2008 recommends using the far-field variable aperture method to measure it. This paper analyzes the distribution of the propagating light field in a single-mode fiber. The mode behavior of the light field is the solution of the Helmholtz equation, which in theory should satisfy the Bessel function. In this regard, a method using Bessel function to fit the optical field distribution of the fiber based on the far-field variable aperture method is proposed, and the mode field diameter is calculated from the fitted mode field distribution curve. Compared with the commonly used far-field variable aperture method, when the measurement data is normal, this method has the same measurement accuracy. When there are errors in the measurement data, this method can still ensure the stability and accuracy of the measurement results.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89895722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusing point cloud with image for object detection using convolutional neural networks","authors":"Zhang Jiesong, Huang Yingping, Zhang Rui","doi":"10.12086/OEE.2021.200325","DOIUrl":"https://doi.org/10.12086/OEE.2021.200325","url":null,"abstract":"Addressing on the issues like varying object scale, complicated illumination conditions, and lack of reliable distance information in driverless applications, this paper proposes a multi-modal fusion method for object detection by using convolutional neural networks. The depth map is generated by mapping LiDAR point cloud onto the image plane and taken as input data together with the RGB image. The input data is also processed by the sliding window to reduce information loss. Two feature extracting networks are used to extract features of the image and the depth map respectively. The generated feature maps are fused through a connection layer. The objects are detected by processing the fused feature map through position regression and object classification. Non-maximal suppression is used to optimize the detection results. The experimental results on the KITTI dataset show that the proposed method is robust in various illumination conditions and especially effective on detecting small objects. Compared with other methods, the proposed method exhibits integrated advantages in terms of detection accuracy and speed.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85872934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereoscopic zoom for visual optimization based on grid deformation","authors":"Zhou Shen, Chai Xiongli, Shao Feng","doi":"10.12086/OEE.2021.200186","DOIUrl":"https://doi.org/10.12086/OEE.2021.200186","url":null,"abstract":"Stereoscopic image zoom optimization is a popular basic research problem in the field of image processing and computer vision in recent years. The zoom visual enhancement technology of 3D images has attracted more and more attention. To this end, this paper proposes a method of stereoscopic zoom vision optimization based on grid deformation from the model of camera zoom shooting, and strives to improve the experience of 3D stereo-scopic vision. Firstly, use the digital zoom method to simulate the camera model to properly zoom in on the target area, and then establish the mapping relationship between the reference image and the target image according to the camera zoom distance. Secondly, extract the foreground target object and use the modified just noticeable depth difference (JNDiD) model to guide the adaptive depth adjustment of the target object. Finally, combined with the seven grid-optimized energy terms designed in this paper, the image grid is optimized to improve the visual perception of the target object and ensure a good visual experience for the entire stereoscopic image. Compared with the existing digital zoom method, the proposed method has better effects on the size control of the image target object and the depth adjustment of the target object.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85909004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Micro/Nano profile measurement by structured illumination microscopy utilizing time-domain phase-shift technique","authors":"Fan Songru, Fan-chao Meng, Chen Donghui, Zhao Qing","doi":"10.12086/OEE.2021.200430","DOIUrl":"https://doi.org/10.12086/OEE.2021.200430","url":null,"abstract":"Aiming at the technical difficulties in the rapid detection and reconstruction of three-dimensional micro-nano devices that are difficult to achieve both high precision and high speed, this paper proposes a structured light detection method based on time-domain phase shift technology. The measured light is modulated by a spatial light modulator, and the time-domain phase shift technology is further employed to realize the detection and reconstruction of three-dimensional micro-nano devices. Compared with the traditional structured light detection method, this technology uses the spatial light modulator to measure the phase shift while the sample is scanned axially, so as to ensure the measurement accuracy and improve the measurement efficiency. By analyzing the measurement data, this method can quickly realize three-dimensional shape detection and reconstruction, and the measurement accuracy can be better than 10 nm.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83951181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the calculation method of the ultra-precision turning trajectory of large-vector high-convex cylinders","authors":"Lei Rongliang, Li Yun, Lin Wumei, Z. Shuai, Qing Jianhong, Tang Linfeng","doi":"10.12086/OEE.2021.200192","DOIUrl":"https://doi.org/10.12086/OEE.2021.200192","url":null,"abstract":"Array microstructure optical elements are widely used in various beam homogenization occasions, but conventional processing methods cannot meet the accuracy requirements of large-sagittal convex cylindrical arrays. In this paper, the ultra-precision turning forming method is used to analyze the main factors affecting diamond turning, the sequential search method and the binary search method are designed to find the turning track, and the advantages and disadvantages of the two methods are compared. Furthermore, the binary search method is successfully found by combining the Matlab software turning trajectory and the numerical control program. As proof-of-concept demonstrations, turning experiments are carried on an ultra-precision lathe, and a large-vector high-array microstructure with a surface profile error of 135 nm is obtained. It proves that the force binary search method can accurately obtain the turning trajectory, and this method can be applied to both spherical and aspherical contours, showing important engineering application value.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74019367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embedded gold-plated fiber Bragg grating temperature and stress sensors encapsulated in capillary copper tube","authors":"Z. Yanjun, Gao Haichuan, Zhang Longtu, Liu Qiang, Fu Xinghu","doi":"10.12086/OEE.2021.200195","DOIUrl":"https://doi.org/10.12086/OEE.2021.200195","url":null,"abstract":"In order to realize the non-destructive and real-time dynamic stress monitoring method of the construction machinery surface in complex and harsh environments, a fiber Bragg grating (FBG) stress sensor packaging method based on magnetron sputtering technology is proposed. Two packaging methods of complete embedding (the capillary copper tube embedded in the entire grating area) and two sides embedding (capillary copper tube nested at both ends of the grating area) are studied. The sensitization effect of the sensor is analyzed from the perspective of theory and finite element, and the results are consistent. The physical sensors are made, and temperature, stress, and comparison experiments are carried out. Simulation and experiment show that the FBG sensor improves the sensitivity by about 7.5% under this model. The temperature experiment shows that the temperature feedback correlation coefficient R2 of the second package structure reaches 0.99948, which shows good linearity in the range of 30 ℃80 ℃; the stress experiment correlation coefficient R2 also reaches 0.99924, and the sensitivity is 6.14 pm/MPa. The accuracy of demodulation system reaches 0.05 MPa, it can demodulate stress quickly and accurately. Comparative experiments show that the monitoring system composed of grating demodulator has higher accuracy than the monitoring system composed of strain gauges, and maximum deviation value smaller 59.8%. The packaging structure of metallization method of embedded capillary copper tube combined with organic glue fixed is simple, high sensitivity, and precision, can meet the needs of large-scale construction machinery surface non-destructive real-time health monitoring.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87708879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Error model analysis and correlation correction of Hartmann's focimeter","authors":"Yang Jinlong, Jia Hongzhi, Jin Tao, Xiang Huazhong, Z. Dawei, Zhuang Song-lin","doi":"10.12086/OEE.2021.200238","DOIUrl":"https://doi.org/10.12086/OEE.2021.200238","url":null,"abstract":"In order to reduce the error and improve the measurement accuracy, a more detailed error model is established for the Hartmann method of focal power measurement in this paper. It focuses on the analysis of several problems that cause the error of refraction problems, including the dispersion error of the light source, the inaccurate of the photodetector’s central positing, the tilt of lens, misalignment between incidence axis and main axis of lens, and the incident light and the lens are not perpendicular. At last, it is concluded that the inaccuracy of the center extraction on the photodetector will cause a large error to the final result. For all these reasons, a method of dual bilinear interpolation combined with a fitting method to find the centroid is proposed, proving its effectiveness and accuracy.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83881181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research progress of computational microspectrometer based on speckle inspection","authors":"Zheng Qilin, Wen Long, Chengwei Qin","doi":"10.12086/OEE.2021.200183","DOIUrl":"https://doi.org/10.12086/OEE.2021.200183","url":null,"abstract":"Fast, accurate and nondestructive spectral analysis technique is important and widely used in the fields of scientific research, information, biomedical, pharmaceutical detection, agriculture, environment, and security. However, the existing spectroscopic analysis equipments are usually bulky and complex, which are difficult to adapt to portable application scenarios such as on-site rapid detection, light-load platform, etc. In recent years, miniature spectroscopic detection technology and equipment have received extensive attention, and have been rapidly developed, with significant advantages in size, weight, and power consumption. In particular, the computational spectral analysis technology based on the speckle detection can obtain high-precision spectral information by recording and analyzing the speckle pattern formed by the scattering element on the measured light. This paper will first introduce the related technical principles and technological developments, then analyze the existing techniques including the advantages and disadvantages, and finally discuss and summarize the future development direction and application prospects.","PeriodicalId":39552,"journal":{"name":"Guangdian Gongcheng/Opto-Electronic Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75195736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}