A CIELAB fusion-based generative adversarial network for reliable sand–dust removal in open-pit mines

Impact Factor: 4.2 · CAS Region 2 (Computer Science) · JCR Q2 (Robotics)
Xudong Li, Chong Liu, Yangyang Sun, Wujie Li, Jingmin Li
DOI: 10.1002/rob.22387
Journal: Journal of Field Robotics, Vol. 41, No. 8, pp. 2832–2847
Published: 2024-07-15 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/rob.22387
Citations: 0

Abstract


Intelligent electric shovels are being developed for intelligent mining in open-pit mines. Complex environment detection and target recognition based on image recognition technology are prerequisites for intelligent electric shovel operation. However, open-pit mines contain large amounts of sand–dust, which lowers visibility and shifts colors during data collection, resulting in low-quality images. Images collected for environmental perception in sand–dust environments can seriously degrade the target detection and scene segmentation capabilities of intelligent electric shovels. Therefore, developing an effective image processing algorithm to solve these problems and improve the perception ability of intelligent electric shovels has become crucial. At present, methods based on deep learning have achieved good results in image dehazing, and those results partially transfer to image sand–dust removal. However, deep learning relies heavily on data sets, and existing data sets are concentrated on haze environments, with significant gaps in sand–dust imagery, especially for open-pit mining scenes. Another bottleneck is the limited performance of traditional methods when removing sand–dust from images, such as image distortion and blurring. To address these issues, a method for generating sand–dust image data based on atmospheric physical models and CIELAB color space features is proposed. The impact mechanism of sand–dust on images was analyzed through atmospheric physical models, and the formation of sand–dust images was divided into two parts: blurring and color deviation. We studied the generation of blurring and color-deviation effects based on atmospheric physical models and the CIELAB color space, and designed a two-stage sand–dust image generation method. We also constructed an open-pit mine sand–dust data set in a real mining environment.
Finally, this article takes the generative adversarial network (GAN) as the research foundation and focuses on the formation mechanism of sand–dust image effects. The CIELAB color features are fused into the GAN discriminator as basic priors and additional constraints to improve discrimination. By combining the three feature components of the CIELAB color space and comparing algorithm performance, a feature fusion scheme is determined. The results show that the proposed method generates clear, realistic images, which helps to improve the performance of target detection and scene segmentation tasks in heavy sand–dust open-pit mines.
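The two-stage degradation described in the abstract (atmospheric scattering for visibility loss, then a chromatic shift in the CIELAB color space) can be sketched as follows. This is a minimal illustration under assumed parameters, not the paper's implementation: the transmission value, airlight color, and a*/b* shift magnitudes are placeholder choices, and a full pipeline would derive them from depth and dust density.

```python
import numpy as np

# --- minimal sRGB <-> CIELAB conversion (D65 white point) ---
_M = np.array([[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]])
_WHITE = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """rgb: float array in [0, 1], shape (..., 3) -> CIELAB, shape (..., 3)."""
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    xyz = lin @ _M.T / _WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_to_rgb(lab):
    """Inverse of rgb_to_lab; output clipped to the valid sRGB range [0, 1]."""
    fy = (lab[..., 0] + 16) / 116
    fx = fy + lab[..., 1] / 500
    fz = fy - lab[..., 2] / 200
    f = np.stack([fx, fy, fz], axis=-1)
    xyz = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29)) * _WHITE
    lin = np.clip(xyz @ np.linalg.inv(_M).T, 0.0, None)
    rgb = np.where(lin > 0.0031308, 1.055 * lin ** (1 / 2.4) - 0.055, 12.92 * lin)
    return np.clip(rgb, 0.0, 1.0)

def synthesize_sand_dust(clean, transmission=0.5,
                         airlight=(0.75, 0.62, 0.42), ab_shift=(6.0, 22.0)):
    """Two-stage sand-dust synthesis on a clean image (float RGB in [0, 1]).

    Stage 1: atmospheric scattering model  I = J*t + A*(1 - t)
             (uniform transmission t and a warm airlight A, both assumed).
    Stage 2: color deviation as a shift of the a*/b* channels in CIELAB,
             pushing chroma toward red/yellow to mimic a sand cast.
    """
    A = np.asarray(airlight)
    hazy = clean * transmission + A * (1.0 - transmission)
    lab = rgb_to_lab(hazy)
    lab[..., 1] += ab_shift[0]   # a*: toward red
    lab[..., 2] += ab_shift[1]   # b*: toward yellow
    return lab_to_rgb(lab)
```

A per-pixel transmission map t(x) = exp(-beta * depth(x)) would replace the scalar `transmission` when scene depth is available; the scalar version above is the simplest degenerate case.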

Source journal
Journal of Field Robotics (Engineering & Technology – Robotics)
CiteScore: 15.00
Self-citation rate: 3.60%
Annual publications: 80
Review time: 6 months
Journal description: The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments. The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.