{"title":"一种具有多色特征的复杂现实朦胧场景自适应图像去雾网络","authors":"Zhiyu Lyu , Qi An , Yan Chen","doi":"10.1016/j.engappai.2025.112867","DOIUrl":null,"url":null,"abstract":"<div><div>Real-world hazy scenes can be broadly categorized into four types based on haze distribution and concentration: light homogeneous haze, dense homogeneous haze, light non-homogeneous haze, and dense non-homogeneous haze. However, many existing dehazing models are tailored for specific haze types, struggling to generalize effectively across these diverse conditions. Additionally, these models commonly extract feature information in the Red, Green, and Blue (RGB) color space, which makes it challenging to extract sufficient feature information in various hazy scenes. To address this issue, we propose an Adaptive Network (AdaNet) for multiple hazy scenes. The network includes two sub-networks: a color-guided feature extraction network and a scene reconstruction network. The color-guided feature extraction network is used to capture sufficient color, detail, and other feature information in both RGB and Luminance, Chroma Red, Chroma Blue (YCrCb) color spaces. For light and dense non-homogeneous hazy scenes, we enhance the scene reconstruction network with the Feature Selection Units (FSU) to filter out less relevant information, ensuring precise recovery of critical local details. Additionally, to tackle dehazing in light and dense homogeneous hazy scenes, we integrate the Feature Fusion Units (FFU) that combine multi-level features to improve overall feature utilization. Extensive experiments on multiple datasets with diverse hazy scenes demonstrate that our AdaNet outperforms state-of-the-art dehazing models, producing high-quality dehazed images in quadruple haze scenarios and ensuring reliability for high-level visual tasks in real-world hazy scenes.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"163 ","pages":"Article 112867"},"PeriodicalIF":8.0000,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Adaptive Image Dehazing Network with Multi-Color Feature for Complex Real-World Hazy Scenes\",\"authors\":\"Zhiyu Lyu , Qi An , Yan Chen\",\"doi\":\"10.1016/j.engappai.2025.112867\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Real-world hazy scenes can be broadly categorized into four types based on haze distribution and concentration: light homogeneous haze, dense homogeneous haze, light non-homogeneous haze, and dense non-homogeneous haze. However, many existing dehazing models are tailored for specific haze types, struggling to generalize effectively across these diverse conditions. Additionally, these models commonly extract feature information in the Red, Green, and Blue (RGB) color space, which makes it challenging to extract sufficient feature information in various hazy scenes. To address this issue, we propose an Adaptive Network (AdaNet) for multiple hazy scenes. The network includes two sub-networks: a color-guided feature extraction network and a scene reconstruction network. The color-guided feature extraction network is used to capture sufficient color, detail, and other feature information in both RGB and Luminance, Chroma Red, Chroma Blue (YCrCb) color spaces. 
For light and dense non-homogeneous hazy scenes, we enhance the scene reconstruction network with the Feature Selection Units (FSU) to filter out less relevant information, ensuring precise recovery of critical local details. Additionally, to tackle dehazing in light and dense homogeneous hazy scenes, we integrate the Feature Fusion Units (FFU) that combine multi-level features to improve overall feature utilization. Extensive experiments on multiple datasets with diverse hazy scenes demonstrate that our AdaNet outperforms state-of-the-art dehazing models, producing high-quality dehazed images in quadruple haze scenarios and ensuring reliability for high-level visual tasks in real-world hazy scenes.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"163 \",\"pages\":\"Article 112867\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625028982\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625028982","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract
Real-world hazy scenes can be broadly categorized into four types based on haze distribution and concentration: light homogeneous haze, dense homogeneous haze, light non-homogeneous haze, and dense non-homogeneous haze. However, many existing dehazing models are tailored to specific haze types and struggle to generalize across these diverse conditions. Additionally, such models commonly extract feature information only in the Red, Green, and Blue (RGB) color space, which makes it difficult to capture sufficient features across varied hazy scenes. To address these issues, we propose an Adaptive Network (AdaNet) for multiple hazy scenes. The network comprises two sub-networks: a color-guided feature extraction network and a scene reconstruction network. The color-guided feature extraction network captures sufficient color, detail, and other feature information in both the RGB and the Luminance, Chroma Red, Chroma Blue (YCrCb) color spaces. For light and dense non-homogeneous hazy scenes, we enhance the scene reconstruction network with Feature Selection Units (FSUs) that filter out less relevant information, ensuring precise recovery of critical local details. To handle dehazing in light and dense homogeneous hazy scenes, we integrate Feature Fusion Units (FFUs) that combine multi-level features to improve overall feature utilization. Extensive experiments on multiple datasets covering diverse hazy scenes demonstrate that AdaNet outperforms state-of-the-art dehazing models, producing high-quality dehazed images across all four haze scenarios and supporting reliable high-level vision tasks in real-world hazy scenes.
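To make the dual color-space idea concrete, the following is a minimal, hypothetical PyTorch sketch of a color-guided feature extractor that processes the same hazy image in both RGB and YCrCb and routes the results through placeholder selection and fusion units. The abstract does not specify AdaNet's internals, so the FeatureSelectionUnit, FeatureFusionUnit, rgb_to_ycrcb helper, and DualColorEncoder module below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of dual color-space feature extraction (RGB + YCrCb).
# The real AdaNet layers are not described in the abstract; everything below is illustrative.
import torch
import torch.nn as nn


def rgb_to_ycrcb(x: torch.Tensor) -> torch.Tensor:
    """Convert a batch of RGB images (N, 3, H, W) in [0, 1] to YCrCb (ITU-R BT.601)."""
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 0.5
    cb = (b - y) * 0.564 + 0.5
    return torch.cat([y, cr, cb], dim=1)


class FeatureSelectionUnit(nn.Module):
    """Assumed form of an FSU: channel attention that down-weights less relevant features."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.gate(feat)


class FeatureFusionUnit(nn.Module):
    """Assumed form of an FFU: fuse two same-shape feature maps with a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([a, b], dim=1))


class DualColorEncoder(nn.Module):
    """Extract features from the RGB and YCrCb views of the same hazy image, then fuse them."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.rgb_branch = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.ycrcb_branch = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.fsu_rgb = FeatureSelectionUnit(channels)
        self.fsu_ycrcb = FeatureSelectionUnit(channels)
        self.ffu = FeatureFusionUnit(channels)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        f_rgb = self.fsu_rgb(self.rgb_branch(rgb))
        f_ycrcb = self.fsu_ycrcb(self.ycrcb_branch(rgb_to_ycrcb(rgb)))
        return self.ffu(f_rgb, f_ycrcb)


if __name__ == "__main__":
    hazy = torch.rand(1, 3, 256, 256)   # dummy hazy image in [0, 1]
    features = DualColorEncoder()(hazy)
    print(features.shape)               # torch.Size([1, 32, 256, 256])
```

The two-branch layout simply mirrors the abstract's description of extracting features in both color spaces before reconstruction; the luminance/chroma separation in YCrCb is often useful under haze because haze degrades luminance and chroma differently.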
Journal Introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.