Diego I. Nogueras-Rivera, Harry Bonilla-Alvarado, Julio A. Reyes-Munoz, Alex D. Santiago-Vargas, Luis M. Traverso-Aviles, Diego A. Aponte-Roa
2022 IEEE/PES Transmission and Distribution Conference and Exposition (T&D), April 25, 2022
DOI: https://doi.org/10.1109/td43745.2022.9816854
Benchmarking of Deep Learning Algorithms for Compressor Air Leak Prediction in a Gas Turbine
In today's electrical grid, power plants are required to continuously monitor performance by combining sensors with advanced data analytics to provide reliable and efficient energy. This study compares the performance of several state-of-the-art deep learning (DL) algorithms for detecting anomalies in time-series data collected from multiple experiments conducted at the U.S. Department of Energy's National Energy Technology Laboratory (NETL) Hybrid Performance (Hyper) Facility, which is equipped with a 120-kW modified gas turbine system designed for hybrid configurations. The experiments consisted of a series of electrical load changes with an emulated compressor leak, reproduced by modulating the compressor bleed air valve. Nine DL architectures were evaluated on a binary classification problem. Algorithm performance was compared using the average Matthews Correlation Coefficient (MCC) score and the stability of results over a series of tests. Each algorithm was trained to predict the label of the first future time-step and, subsequently, the tenth, to understand how predictive performance is affected when predicting time-steps further from the present. Results suggest that, for predicting the first future time-step, the most feasible algorithms were the hybrid GRU-LSTM and the parallel CNN-LSTM, with average MCC scores of approximately 71% and 70%, respectively. The most stable algorithms that still maintained acceptable performance were the sequential CNN-LSTM and the Bi-LSTM, with MCC scores of 69% and 68%, respectively. For the tenth future time-step, results suggest that the best algorithm was the TCN-FF, with an average MCC score of 75%; an alternative worth exploring for this case is the sequential CNN-LSTM, with an average MCC score of 66% and high stability.
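The abstract's evaluation metric and prediction setup can be sketched concretely. The MCC formula below is the standard one for binary classification; the windowing helper (`make_windows`, with its `window` and `horizon` parameters) is a hypothetical illustration of labeling the first versus the tenth future time-step, not the paper's actual implementation.

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # MCC is undefined when any confusion-matrix marginal is zero;
    # returning 0.0 in that case is a common fallback convention.
    return (tp * tn - fp * fn) / denom if denom else 0.0

def make_windows(series, labels, window, horizon):
    """Pair each sliding window of sensor readings with the leak label
    `horizon` steps past the window's last sample (horizon=1 -> first
    future time-step, horizon=10 -> tenth). Hypothetical sketch only."""
    xs, ys = [], []
    for i in range(len(series) - window - horizon + 1):
        xs.append(series[i:i + window])
        ys.append(labels[i + window - 1 + horizon])
    return xs, ys
```

For example, `mcc([1, 1, 0, 0], [1, 1, 0, 0])` returns 1.0 (perfect prediction), while fully inverted predictions return -1.0. Note that increasing `horizon` from 1 to 10 also shrinks the number of usable training windows, one practical cost of predicting further into the future.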