Fengrun Tang, Yonggang Li, Fan Mo, Chunhua Yang, Bei Sun
{"title":"用于工业回转窑运行状况识别的两阶段多源异构信息融合框架","authors":"Fengrun Tang , Yonggang Li , Fan Mo , Chunhua Yang , Bei Sun","doi":"10.1016/j.aei.2025.103251","DOIUrl":null,"url":null,"abstract":"<div><div>The operating condition identification plays an irreplaceable role for the low-carbon and high-efficiency operation of industrial rotary kilns. However, existing single-stage multisource heterogeneous information fusion methods lack a unified framework to simultaneously fuse the complementary properties among visible images, infrared images, and process data, thus limiting the condition recognition accuracy. Moreover, smoke and dust interference make it challenging to extract critical image features such as flame brightness and blast pipe position, increasing the difficulty of condition recognition. To this end, this paper proposes a two-stage multisource heterogeneous information fusion (TSMHIF) framework for operating condition identification of industrial rotary kilns. First, in the initial fusion stage, a condition-aware visible and infrared image fusion network (CAVIF) is designed to generate fused images containing complementary properties of source images. In this network, a self-developed novel industrial system is utilized to collect aligned visible-infrared images of industrial rotary kilns. Next, an interpretable feature engineering is constructed by incorporating extracted shallow features based on mechanism knowledge and mined deep features with an autoencoder, and the blast pipe position in the shallow features is quantified by a keypoint detection algorithm based on a cascaded pyramid network (CPN). Then, in the comprehensive fusion stage, a multiplication operation is employed to fuse multisource heterogeneous deep features from fused images and process data to recognize the operating conditions. Finally, a joint training strategy is developed to balance the image fusion and condition classification networks. The classification loss, i.e., condition-aware loss, guides the training of the visible-infrared image fusion network to improve the visual quality of the fused images. The industrial experiments show that our proposed method exhibits superior performance in terms of identification accuracy, condition prediction deviation, and visual quality of fused images compared to other competitors.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"65 ","pages":"Article 103251"},"PeriodicalIF":9.9000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A two-stage multisource heterogeneous information fusion framework for operating condition identification of industrial rotary kilns\",\"authors\":\"Fengrun Tang , Yonggang Li , Fan Mo , Chunhua Yang , Bei Sun\",\"doi\":\"10.1016/j.aei.2025.103251\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The operating condition identification plays an irreplaceable role for the low-carbon and high-efficiency operation of industrial rotary kilns. However, existing single-stage multisource heterogeneous information fusion methods lack a unified framework to simultaneously fuse the complementary properties among visible images, infrared images, and process data, thus limiting the condition recognition accuracy. 
Moreover, smoke and dust interference make it challenging to extract critical image features such as flame brightness and blast pipe position, increasing the difficulty of condition recognition. To this end, this paper proposes a two-stage multisource heterogeneous information fusion (TSMHIF) framework for operating condition identification of industrial rotary kilns. First, in the initial fusion stage, a condition-aware visible and infrared image fusion network (CAVIF) is designed to generate fused images containing complementary properties of source images. In this network, a self-developed novel industrial system is utilized to collect aligned visible-infrared images of industrial rotary kilns. Next, an interpretable feature engineering is constructed by incorporating extracted shallow features based on mechanism knowledge and mined deep features with an autoencoder, and the blast pipe position in the shallow features is quantified by a keypoint detection algorithm based on a cascaded pyramid network (CPN). Then, in the comprehensive fusion stage, a multiplication operation is employed to fuse multisource heterogeneous deep features from fused images and process data to recognize the operating conditions. Finally, a joint training strategy is developed to balance the image fusion and condition classification networks. The classification loss, i.e., condition-aware loss, guides the training of the visible-infrared image fusion network to improve the visual quality of the fused images. The industrial experiments show that our proposed method exhibits superior performance in terms of identification accuracy, condition prediction deviation, and visual quality of fused images compared to other competitors.</div></div>\",\"PeriodicalId\":50941,\"journal\":{\"name\":\"Advanced Engineering Informatics\",\"volume\":\"65 \",\"pages\":\"Article 103251\"},\"PeriodicalIF\":9.9000,\"publicationDate\":\"2025-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced Engineering Informatics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1474034625001442\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Engineering Informatics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1474034625001442","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A two-stage multisource heterogeneous information fusion framework for operating condition identification of industrial rotary kilns
Operating condition identification plays an irreplaceable role in the low-carbon, high-efficiency operation of industrial rotary kilns. However, existing single-stage multisource heterogeneous information fusion methods lack a unified framework that simultaneously fuses the complementary properties of visible images, infrared images, and process data, which limits condition recognition accuracy. Moreover, smoke and dust interference makes it challenging to extract critical image features such as flame brightness and blast pipe position, further increasing the difficulty of condition recognition. To this end, this paper proposes a two-stage multisource heterogeneous information fusion (TSMHIF) framework for operating condition identification of industrial rotary kilns. First, in the initial fusion stage, a condition-aware visible and infrared image fusion network (CAVIF) is designed to generate fused images that retain the complementary properties of the source images; a self-developed industrial acquisition system is used to collect aligned visible-infrared image pairs of industrial rotary kilns. Next, an interpretable feature engineering scheme is constructed by combining shallow features extracted from mechanism knowledge with deep features mined by an autoencoder, and the blast pipe position among the shallow features is quantified by a keypoint detection algorithm based on a cascaded pyramid network (CPN). Then, in the comprehensive fusion stage, a multiplication operation fuses the multisource heterogeneous deep features from the fused images and the process data to recognize the operating conditions. Finally, a joint training strategy is developed to balance the image fusion and condition classification networks: the classification loss, i.e., the condition-aware loss, guides the training of the visible-infrared image fusion network and improves the visual quality of the fused images. Industrial experiments show that the proposed method outperforms competing methods in identification accuracy, condition prediction deviation, and the visual quality of the fused images.
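As a rough illustration of the comprehensive fusion stage and the joint training objective described above, the sketch below multiplies deep features from a fused-image encoder and a process-data encoder before classification, and adds the condition-aware classification loss to a placeholder image-fusion loss. All module names, layer sizes, and the weight lambda_cls are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of multiplication-based feature fusion with a
# condition-aware classification loss; architectures and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn


class ImageBranch(nn.Module):
    """Toy CNN encoder for the fused visible-infrared image (assumed architecture)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.backbone(x)


class ProcessBranch(nn.Module):
    """Toy encoder for process variables (stand-in for the autoencoder's encoder)."""

    def __init__(self, n_vars: int = 12, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_vars, 32), nn.ReLU(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.encoder(x)


class MultiplicativeFusionClassifier(nn.Module):
    """Fuses the two deep feature vectors by element-wise multiplication,
    then predicts the operating condition class."""

    def __init__(self, feat_dim: int = 64, n_conditions: int = 4):
        super().__init__()
        self.image_branch = ImageBranch(feat_dim)
        self.process_branch = ProcessBranch(feat_dim=feat_dim)
        self.classifier = nn.Linear(feat_dim, n_conditions)

    def forward(self, fused_image, process_data):
        f_img = self.image_branch(fused_image)
        f_proc = self.process_branch(process_data)
        fused = f_img * f_proc          # multiplication-based feature fusion
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiplicativeFusionClassifier()
    ce_loss = nn.CrossEntropyLoss()

    fused_image = torch.randn(8, 3, 128, 128)   # batch of fused images
    process_data = torch.randn(8, 12)           # aligned process measurements
    labels = torch.randint(0, 4, (8,))          # operating condition labels

    logits = model(fused_image, process_data)
    cls_loss = ce_loss(logits, labels)          # condition-aware loss

    # Joint objective: the condition-aware loss is added, with a weight, to the
    # image-fusion network's own loss so both networks are trained together.
    fusion_loss = torch.tensor(0.0)             # placeholder for the CAVIF loss
    lambda_cls = 0.5                            # assumed trade-off weight
    total_loss = fusion_loss + lambda_cls * cls_loss
    total_loss.backward()
```

In this sketch, the gradient of the classification loss flows back through both encoders; in the full framework it would also reach the CAVIF image fusion network, which is how the condition-aware loss can improve the visual quality of the fused images.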
Journal Introduction:
Advanced Engineering Informatics is an international Journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The Journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.