Image stitching method for CMOS grayscale cameras in industrial applications

IF 4.6 · CAS Tier 2 (Physics and Astronomy) · JCR Q1 (Optics)
Qi Liu, Ju Huo, Xiyu Tang, Muyao Xue
{"title":"工业应用中 CMOS 灰度相机的图像拼接方法","authors":"Qi Liu ,&nbsp;Ju Huo ,&nbsp;Xiyu Tang ,&nbsp;Muyao Xue","doi":"10.1016/j.optlastec.2024.111874","DOIUrl":null,"url":null,"abstract":"<div><div>To address the limited field of view (FOV) of CMOS grayscale cameras, complex lighting conditions, and the scarcity of image features in industrial applications, a novel image stitching method is proposed for CMOS grayscale cameras operating under varying lighting conditions. This method broadens the camera’s FOV while preserving the interpretability of image features, thereby enhancing the robustness and generalizability of image stitching across diverse lighting environments and feature-sparse settings. In the feature extraction phase, a hybrid deep feature extraction network is designed. By employing a deep learning-based approach, the network ensures the extraction of a substantial quantity of features. Building on this foundation, a method for line feature selection and reconstruction is developed to refine feature-matching accuracy, which increases the number of matching lines in extreme lighting and feature-scarce situations, and enriches the image features for subsequent stitching processes. In the subsequent image transformation phase, planar feature constraints are introduced; matching feature points and lines are used to generate planar features, addressing alterations in the collective shape of planes that are common in industrial image stitching. The paper concludes by presenting quantitative evaluation metrics for planar feature-based stitching. Experimental results validate the effectiveness and feasibility of the proposed method for image stitching of CMOS grayscale cameras under varied lighting conditions and in feature-deficient industrial settings, offering a viable solution to the challenges posed by the limited imaging FOV in industrial applications.</div></div>","PeriodicalId":19511,"journal":{"name":"Optics and Laser Technology","volume":"181 ","pages":"Article 111874"},"PeriodicalIF":4.6000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Image stitching method for CMOS grayscale cameras in industrial applications\",\"authors\":\"Qi Liu ,&nbsp;Ju Huo ,&nbsp;Xiyu Tang ,&nbsp;Muyao Xue\",\"doi\":\"10.1016/j.optlastec.2024.111874\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To address the limited field of view (FOV) of CMOS grayscale cameras, complex lighting conditions, and the scarcity of image features in industrial applications, a novel image stitching method is proposed for CMOS grayscale cameras operating under varying lighting conditions. This method broadens the camera’s FOV while preserving the interpretability of image features, thereby enhancing the robustness and generalizability of image stitching across diverse lighting environments and feature-sparse settings. In the feature extraction phase, a hybrid deep feature extraction network is designed. By employing a deep learning-based approach, the network ensures the extraction of a substantial quantity of features. Building on this foundation, a method for line feature selection and reconstruction is developed to refine feature-matching accuracy, which increases the number of matching lines in extreme lighting and feature-scarce situations, and enriches the image features for subsequent stitching processes. 
In the subsequent image transformation phase, planar feature constraints are introduced; matching feature points and lines are used to generate planar features, addressing alterations in the collective shape of planes that are common in industrial image stitching. The paper concludes by presenting quantitative evaluation metrics for planar feature-based stitching. Experimental results validate the effectiveness and feasibility of the proposed method for image stitching of CMOS grayscale cameras under varied lighting conditions and in feature-deficient industrial settings, offering a viable solution to the challenges posed by the limited imaging FOV in industrial applications.</div></div>\",\"PeriodicalId\":19511,\"journal\":{\"name\":\"Optics and Laser Technology\",\"volume\":\"181 \",\"pages\":\"Article 111874\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics and Laser Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S003039922401332X\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Laser Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S003039922401332X","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

To address the limited field of view (FOV) of CMOS grayscale cameras, complex lighting conditions, and the scarcity of image features in industrial applications, a novel image stitching method is proposed for CMOS grayscale cameras operating under varying lighting conditions. This method broadens the camera’s FOV while preserving the interpretability of image features, thereby enhancing the robustness and generalizability of image stitching across diverse lighting environments and feature-sparse settings. In the feature extraction phase, a hybrid deep feature extraction network is designed. By employing a deep learning-based approach, the network ensures the extraction of a substantial quantity of features. Building on this foundation, a method for line feature selection and reconstruction is developed to refine feature-matching accuracy, which increases the number of matching lines in extreme lighting and feature-scarce situations, and enriches the image features for subsequent stitching processes. In the subsequent image transformation phase, planar feature constraints are introduced; matching feature points and lines are used to generate planar features, addressing alterations in the collective shape of planes that are common in industrial image stitching. The paper concludes by presenting quantitative evaluation metrics for planar feature-based stitching. Experimental results validate the effectiveness and feasibility of the proposed method for image stitching of CMOS grayscale cameras under varied lighting conditions and in feature-deficient industrial settings, offering a viable solution to the challenges posed by the limited imaging FOV in industrial applications.
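The abstract outlines a pipeline of deep point/line feature extraction, line feature selection and reconstruction, and planar-feature-constrained image transformation. For orientation only, the sketch below shows a conventional baseline for stitching two grayscale frames with OpenCV (ORB point features plus a RANSAC homography). It is not the authors' method: the hybrid deep feature network, line reconstruction, and planar constraints described in the paper are not reproduced, and the input file names are placeholders.

```python
# Minimal stitching sketch for two grayscale frames (baseline only, not the
# paper's method). Assumes OpenCV (cv2) and NumPy are installed; left.png and
# right.png are placeholder file names.
import cv2
import numpy as np


def stitch_pair(img_left: np.ndarray, img_right: np.ndarray) -> np.ndarray:
    """Estimate a homography from matched point features and warp img_right
    into the frame of img_left."""
    # 1. Point features: ORB stands in for the paper's hybrid deep network.
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # 2. Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Robust homography via RANSAC; the paper additionally constrains this
    #    step with line and planar features, which are omitted here.
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, ransacReprojThreshold=3.0)

    # 4. Warp the right image onto a canvas wide enough for both views.
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[:h, :w] = img_left
    return canvas


if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("stitched.png", stitch_pair(left, right))
```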
Source journal: Optics and Laser Technology
CiteScore: 8.50
Self-citation rate: 10.00%
Annual publications: 1060
Average review time: 3.4 months
Journal description: Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication. The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
• development in all types of lasers
• developments in optoelectronic devices and photonics
• developments in new photonics and optical concepts
• developments in conventional optics, optical instruments and components
• techniques of optical metrology, including interferometry and optical fibre sensors
• LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
• applications of lasers to materials processing, optical NDT, display (including holography) and optical communication
• research and development in the field of laser safety, including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
• developments in optical computing and optical information processing
• developments in new optical materials
• developments in new optical characterization methods and techniques
• developments in quantum optics
• developments in light assisted micro and nanofabrication methods and techniques
• developments in nanophotonics and biophotonics
• developments in imaging processing and systems