A deep learning framework integrating Transformer and LSTM architectures for pipeline corrosion rate forecasting

Impact Factor 3.9 · CAS Zone 2 (Engineering & Technology) · JCR Q2, Computer Science, Interdisciplinary Applications
Jianxun Jiang, Xinli Wan, Feng Zhu, Duole Xiang, Ziyan Hu, Shuxing Mu
Computers & Chemical Engineering, Volume 204, Article 109365 (published 2025-08-24). DOI: 10.1016/j.compchemeng.2025.109365
URL: https://www.sciencedirect.com/science/article/pii/S0098135425003680
Citations: 0

Abstract

Accurately predicting the corrosion rate is crucial for ensuring the safe operation of buried pipelines. Current research on pipeline corrosion prediction is largely confined to static methods, which do not fully capture dynamic safety considerations. In contrast, machine learning techniques can process experimental data more effectively and capture its complex characteristics. On this basis, this paper proposes an interpretable Transformer-LSTM (Long Short-Term Memory) model for predicting the corrosion rate of buried pipelines. Its core innovation lies in modifying the Transformer architecture: the decoder layer of the traditional Transformer is replaced with a fully connected layer, and the original attention layer is replaced with an LSTM layer. This modification allows the model to use the LSTM's memory cells to store and update information within the sequence effectively. Finally, two case studies were used for verification. Taking Case 1 as an example, the results show that, compared to the LSTM model, the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE) of the Transformer-LSTM model are reduced by 85.5%, 89.8%, and 83.2%, respectively. Compared to the Transformer model, the MAE, MAPE, and RMSE of the Transformer-LSTM model decreased by 73.8%, 80.5%, and 68.6%, respectively. Additionally, the SHapley Additive exPlanations (SHAP) method is employed to provide a global and intuitive explanation of the model, helping to clarify the contribution of each input feature. These findings will help pipeline operators better plan pipeline operation and maintenance.
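For readers unfamiliar with the three error measures reported above, a minimal sketch of their definitions follows. This is not the authors' code, and the corrosion-rate values used are hypothetical, chosen only to illustrate the computation.

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the prediction errors
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error: errors expressed relative to the true values
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Square Error: penalizes large deviations more heavily than MAE
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical corrosion-rate observations and model predictions (e.g. mm/year)
y_true = [0.12, 0.15, 0.11, 0.18]
y_pred = [0.13, 0.14, 0.12, 0.16]
print(mae(y_true, y_pred))   # 0.0125
print(rmse(y_true, y_pred))
print(mape(y_true, y_pred))
```

The reported percentage reductions (e.g. "MAE reduced by 85.5%") compare each metric for the Transformer-LSTM model against the same metric for the baseline model on the same test data.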
Source journal: Computers & Chemical Engineering (Engineering & Technology — Chemical Engineering)
CiteScore: 8.70 · Self-citation rate: 14.00% · Annual articles: 374 · Review time: 70 days
Journal description: Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.