AI and Machine Learning for Control Applications

Jiusun Zeng, Shaohan Chen, Xiaoyu Zhang, Chuanhou Gao
{"title":"AI and Machine Learning for Control Applications","authors":"Jiusun Zeng,&nbsp;Shaohan Chen,&nbsp;Xiaoyu Zhang,&nbsp;Chuanhou Gao","doi":"10.1002/acs.4026","DOIUrl":null,"url":null,"abstract":"<p>The rapid advancement of artificial intelligence (AI) and machine learning technologies has fundamentally changed the traditional paradigm of control engineering. The focus of this special issue was to inspire people to discuss how AI and machine learning techniques can be used to enhance control applications in a wide range of fields, such as industrial process monitoring and fault diagnosis, optimal process design and control, deep generative model-based target recognition, and so forth. The varieties of methodologies and application studies within this special issue fully revealed the potential and necessity to further promote control-oriented AI and machine learning techniques. It is believed that this subject will continue to flourish and become one of the centerpieces of control research communities.</p><p>Among the papers accepted in the special issue, the first element to emerge is the development of AI and machine learning techniques for industrial process monitoring and anomaly localization [<span>1-3</span>]. Modern industrial processes often exhibit complicated characteristics of time-varying, multi-unit collaboration, multi-rate measurements, and significant process noises. There is an urgent need to understand and handle these characteristics. People within the study by Wu et al. [<span>3</span>] developed an adaptive spatiotemporal decouple graph convolution network to deal with the time-varying characteristics of large-scale process. The adaptive spatiotemporal graph is capable of incorporating prior knowledge and better reflecting the dynamic relationships among process variables. The proposed feature redundancy reduction scheme can simplify the graph structure and results in a more interpretable model. The enhanced fault detection performance revealed the potential of the adaptive graph neural network in industrial process monitoring. A further research issue is the multi-unit collaboration and multi-rate measurements in industrial processes. The work of Dong et al. [<span>1</span>] introduced a subsystem decomposition method and the multi-rate partial least squares, which showed promising performance in identifying process faults. In handling process noises, Jia et al. [<span>2</span>] introduced a slow feature-constrained decomposition autoencoder for anomaly detection isolation in industrial processes, which reduced the high-frequency noise and translated into better fault detection performance and isolation accuracy.</p><p>The second element discussed by the papers within this special issue is fault diagnosis and performance degradation prediction of rotating machinery and fuel cell stack [<span>4-10</span>]. Despite the numerous research progress made in fault diagnosis of rotating machinery in recent years, there is still a lack of effective solution to address issues like domain drift and unknown faults, data imbalance, strong noise, and so forth. Lin et al. [<span>4</span>] introduced a few-shot learning-based unknown recognition and classification method to deal with domain drift and unknown faults. The domain drift problem is handled by incorporating data scaling using Min-Max scaling, so that the drift in the vibration data can be dealt with without changing the source data distribution. Other issues like irregular sampling intervals are also considered. 
The work by Lu [<span>5</span>] focuses on the imbalanced data problem, which involves the multi-scale convolution neural networks and transformer. Wei et al. [<span>6</span>] developed a graph convolution network-based framework to deal with strongly noisy environments. The work of Zhang et al. [<span>7-9</span>] develops a belief-rule-based (BRB) technique for machinery fault diagnosis. The BRB method involves a two-stage feature extraction procedure using complex network and principal component analysis, which improved the separability of fault features. Another important research issue for machinery products is degradation prediction. Zhou et al. [<span>10</span>] developed a remaining useful life predictive method based on the adaptive continuous deep belief networks and improved kernel extreme learning machine. The work of Zhou et al. [<span>10</span>] involves two-stage prediction procedures, with feature extraction using deep belief networks being the first stage and prediction using kernel extreme learning machine being the second stage. On the other hand, the work of Zhang et al. [<span>7-9</span>] focuses on the multi-step performance degradation prediction problem of the proton-exchange membrane fuel cell stack. By incorporating the 1D convolution layer and the interactive learning mechanism of CatBoost, multi-step prediction can be achieved.</p><p>The third element of the special issue involves the incorporation of AI and machine learning methods with control problems [<span>7-9, 11-13</span>], covering control problems like robot control, iterative learning control, and disturbance compensation control. The work by Zhang et al. [<span>7-9</span>] introduces a conditional adversarial motion priors method based on reinforcement learning for humanoid robot control, which can be used to control straight-legged walking. The work by Aarnoudse and Oomen [<span>11</span>] proposed a data-driven MIMO iterative learning control method, which uses random learning in the form of unbiased gradient estimates. The convergence speed of the random learning-based method is further verified in an industrial printing process. Finally, the disturbance compensation control problem for discrete-time systems using reinforcement learning is discussed in Li et al. [<span>12, 13</span>], which used a new off-policy Q-learning algorithm to update the state feedback controller and compensator parameters.</p><p>The fourth element in this special issue covers the problems of system identification, neural operator approximation of partial differential equations (PDE) and pump scheduling [<span>12-15</span>]. Parameter identification of the Hammerstein system is an important problem in system identification. The work by Li et al. [<span>12, 13</span>] applies the neural fuzzy model and ARMAX model to decouple the Hammerstein system and uses combined signals to identify parameters in the system. In Lv et al. [<span>14</span>], a neural operator learning method is applied to accelerate the control design of cascaded parabolic PDEs, with the nonlinear operators approximated by the deep neural network of DeepONet. In Shao et al. [<span>15</span>], a deep reinforcement learning scheme is designed for pump scheduling of large-scale multiproduct pipelines, which is solved using the enhanced proximal policy optimization algorithm.</p><p>It should be noted that this special issue only covers a small fraction of the potential applications of artificial and machine learning in control engineering. 
We firmly believe that more and more promising control applications of AI and machine learning will be made in the future.</p><p>The authors declare no conflicts of interest.</p>","PeriodicalId":50347,"journal":{"name":"International Journal of Adaptive Control and Signal Processing","volume":"39 7","pages":"1362-1363"},"PeriodicalIF":3.9000,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acs.4026","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Adaptive Control and Signal Processing","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/acs.4026","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

The rapid advancement of artificial intelligence (AI) and machine learning technologies has fundamentally changed the traditional paradigm of control engineering. The focus of this special issue is to stimulate discussion on how AI and machine learning techniques can be used to enhance control applications in a wide range of fields, such as industrial process monitoring and fault diagnosis, optimal process design and control, deep generative model-based target recognition, and so forth. The variety of methodologies and application studies in this special issue demonstrates both the potential of control-oriented AI and machine learning techniques and the need to promote them further. We believe this subject will continue to flourish and become a centerpiece of the control research community.

Among the papers accepted in the special issue, the first theme to emerge is the development of AI and machine learning techniques for industrial process monitoring and anomaly localization [1-3]. Modern industrial processes often exhibit complicated characteristics such as time-varying behavior, multi-unit collaboration, multi-rate measurements, and significant process noise, and there is an urgent need to understand and handle these characteristics. Wu et al. [3] developed an adaptive spatiotemporal decoupled graph convolutional network to deal with the time-varying characteristics of large-scale processes. The adaptive spatiotemporal graph can incorporate prior knowledge and better reflect the dynamic relationships among process variables, while the proposed feature redundancy reduction scheme simplifies the graph structure and yields a more interpretable model. The enhanced fault detection performance reveals the potential of adaptive graph neural networks in industrial process monitoring. A further research issue is multi-unit collaboration and multi-rate measurement in industrial processes. Dong et al. [1] introduced a subsystem decomposition method together with multi-rate partial least squares, which showed promising performance in identifying process faults. To handle process noise, Jia et al. [2] introduced a slow feature-constrained decomposition autoencoder for anomaly detection and isolation in industrial processes, which suppresses high-frequency noise and translates into better fault detection performance and isolation accuracy.
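
To illustrate the general idea behind slowness-constrained monitoring as in [2], the following minimal sketch applies plain linear slow feature analysis to synthetic data as a simple stand-in for the slow feature-constrained decomposition autoencoder; the mixing model, subspace dimension, and monitoring index are illustrative assumptions rather than details of the published method.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for process data: two slowly varying signals mixed with fast noise.
T = 2000
t = np.arange(T)
slow = np.column_stack([np.sin(2 * np.pi * t / 500), np.cos(2 * np.pi * t / 800)])
fast = rng.standard_normal((T, 3))
X = np.hstack([slow, fast]) @ rng.standard_normal((5, 5))  # random linear mixing

# Linear slow feature analysis: whiten the data, then keep the directions whose
# first differences have the smallest variance (the "slowest" latent signals).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = (Xc @ Vt.T) / s * np.sqrt(T - 1)          # whitened data with unit covariance
dZ = np.diff(Z, axis=0)
eigvals, W = np.linalg.eigh(dZ.T @ dZ / (T - 2))
slow_features = Z @ W[:, :2]                  # the two slowest directions

# A simple T^2-style monitoring index on the slow subspace; high-frequency noise
# is largely confined to the discarded fast directions.
T2 = np.sum(slow_features**2 / slow_features.var(axis=0), axis=1)
print("95th percentile of the monitoring index:", np.quantile(T2, 0.95))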

The second theme discussed by the papers in this special issue is fault diagnosis and performance degradation prediction for rotating machinery and fuel cell stacks [4-10]. Despite the considerable research progress made in fault diagnosis of rotating machinery in recent years, effective solutions are still lacking for issues such as domain drift, unknown faults, data imbalance, and strong noise. Lin et al. [4] introduced a few-shot learning-based unknown-fault recognition and classification method to deal with domain drift and unknown faults. The domain drift problem is handled by Min-Max scaling of the data, so that drift in the vibration data can be accommodated without changing the source data distribution; other issues such as irregular sampling intervals are also considered. The work by Lu [5] addresses the imbalanced data problem using multi-scale convolutional neural networks and a transformer. Wei et al. [6] developed a graph convolutional network-based framework to deal with strongly noisy environments. The work of Zhang et al. [7-9] develops a belief-rule-based (BRB) technique for machinery fault diagnosis; the BRB method involves a two-stage feature extraction procedure using a complex network and principal component analysis, which improves the separability of fault features. Another important research issue for machinery products is degradation prediction. Zhou et al. [10] developed a remaining useful life prediction method based on adaptive continuous deep belief networks and an improved kernel extreme learning machine. The method involves a two-stage prediction procedure, with feature extraction using deep belief networks in the first stage and prediction using the kernel extreme learning machine in the second. On the other hand, the work of Zhang et al. [7-9] focuses on multi-step performance degradation prediction for the proton-exchange membrane fuel cell stack; by combining a 1D convolution layer with the interactive learning mechanism of CatBoost, multi-step prediction is achieved.
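
The two-stage extract-then-predict structure described for Zhou et al. [10] can be sketched with off-the-shelf stand-ins: below, a restricted Boltzmann machine plays the role of the deep belief network for unsupervised feature extraction, and kernel ridge regression stands in for the kernel extreme learning machine. The synthetic run-to-failure data and all hyperparameters are assumptions chosen only for illustration.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Hypothetical run-to-failure data: rows are sensor feature windows scaled to [0, 1],
# y is the remaining useful life decreasing over the run.
X = rng.random((300, 40))
y = np.linspace(100.0, 0.0, 300) + rng.normal(0.0, 2.0, 300)

model = Pipeline([
    # Stage 1: unsupervised feature extraction (an RBM layer as a stand-in
    # for the adaptive continuous deep belief network).
    ("features", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    # Stage 2: kernel regression on the extracted features (kernel ridge as a
    # stand-in for the improved kernel extreme learning machine).
    ("rul", KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)),
])
model.fit(X, y)
print("predicted remaining useful life for the first 5 windows:", model.predict(X[:5]))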

The third theme of the special issue involves the integration of AI and machine learning methods with control problems [7-9, 11-13], covering topics such as robot control, iterative learning control, and disturbance compensation control. The work by Zhang et al. [7-9] introduces a conditional adversarial motion priors method based on reinforcement learning for humanoid robot control, which can be used to achieve straight-legged walking. Aarnoudse and Oomen [11] propose a data-driven MIMO iterative learning control method that uses random learning in the form of unbiased gradient estimates; the convergence speed of the random-learning-based method is further verified on an industrial printing process. Finally, the disturbance compensation control problem for discrete-time systems using reinforcement learning is discussed by Li et al. [12, 13], who use a new off-policy Q-learning algorithm to update the state feedback controller and compensator parameters.
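
As a rough illustration of iterative learning control driven by randomized gradient estimates, the sketch below runs a gradient-descent ILC update for a simulated single-input lifted system, where the exact gradient direction G^T e is replaced by an unbiased estimate built from random probing experiments. The impulse response, learning gain, and number of probes are illustrative assumptions, and the setup is deliberately much simpler than the MIMO method of Aarnoudse and Oomen [11].

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lifted SISO system over a trial of N samples: y = G @ u,
# with an assumed geometric impulse response.
N = 50
h = 0.8 ** np.arange(N)
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
r = np.sin(np.linspace(0.0, 2.0 * np.pi, N))   # reference trajectory

u = np.zeros(N)
eps = 0.01                                      # learning gain (assumed)
for trial in range(500):
    e = r - G @ u                               # tracking error from the current trial
    # Each probe w gives G @ w from one extra experiment, and ((G w)^T e) w is an
    # unbiased estimate of G^T e; averaging a few probes reduces the variance.
    probes = rng.standard_normal((4, N))
    g_hat = np.mean([((G @ w) @ e) * w for w in probes], axis=0)
    u = u + eps * g_hat                         # stochastic-gradient ILC update

print("tracking error norm after learning:", np.linalg.norm(r - G @ u))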

The fourth theme in this special issue covers system identification, neural operator approximation of partial differential equations (PDEs), and pump scheduling [12-15]. Parameter identification of Hammerstein systems is an important problem in system identification. The work by Li et al. [12, 13] applies a neural fuzzy model and an ARMAX model to decouple the Hammerstein system and uses combined signals to identify its parameters. In Lv et al. [14], a neural operator learning method is applied to accelerate the control design of cascaded parabolic PDEs, with the nonlinear operators approximated by the DeepONet deep neural network. In Shao et al. [15], a deep reinforcement learning scheme is designed for pump scheduling of large-scale multiproduct pipelines and solved using an enhanced proximal policy optimization algorithm.
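
For readers less familiar with the Hammerstein structure, the sketch below identifies a simulated polynomial-nonlinearity-plus-FIR Hammerstein system using a classical over-parameterisation least-squares baseline followed by a rank-one factorisation. This is not the neural fuzzy/ARMAX combined-signal scheme of Li et al. [13]; the system orders, polynomial degree, and data length are assumptions for illustration, and the example highlights the inherent scale ambiguity between the two blocks that any identification scheme must resolve.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Hammerstein system: static polynomial nonlinearity followed by an FIR block.
c_true = np.array([1.0, 0.5, -0.2])        # f(u) = u + 0.5 u^2 - 0.2 u^3
g_true = np.array([0.8, 0.4, 0.1])         # impulse response of the linear block
T = 2000
u = rng.uniform(-1.0, 1.0, T)
f_u = c_true[0] * u + c_true[1] * u**2 + c_true[2] * u**3
y = np.convolve(f_u, g_true)[:T] + 0.01 * rng.standard_normal(T)

# Over-parameterisation: y(t) = sum_{i,j} Theta[i, j] * u(t-i)^j is linear in Theta,
# so ordinary least squares applies directly.
nb, deg = 3, 3
Phi = np.array([[u[t - i] ** j for i in range(nb) for j in range(1, deg + 1)]
                for t in range(nb, T)])
theta, *_ = np.linalg.lstsq(Phi, y[nb:], rcond=None)

# Theta = g c^T is rank one, so an SVD recovers both blocks up to a common scale,
# fixed here by the convention that the linear polynomial coefficient equals 1.
Theta = theta.reshape(nb, deg)
U, s, Vt = np.linalg.svd(Theta)
g_hat, c_hat = U[:, 0] * s[0], Vt[0]
g_hat, c_hat = g_hat * c_hat[0], c_hat / c_hat[0]
print("estimated linear block:", np.round(g_hat, 3))
print("estimated nonlinearity coefficients:", np.round(c_hat, 3))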

It should be noted that this special issue covers only a small fraction of the potential applications of artificial intelligence and machine learning in control engineering. We firmly believe that many more promising control applications of AI and machine learning will emerge in the future.

The authors declare no conflicts of interest.
