{"title":"ALDRAW: Algorithmic engineering representations","authors":"Abhinav Pandey, Vidit Gaur","doi":"10.1016/j.aei.2025.103362","DOIUrl":"10.1016/j.aei.2025.103362","url":null,"abstract":"<div><div>Engineering drawings have been the predominant representation of engineering information but have several deficiencies due to their graphical nature. This paper addresses these issues by proposing an algorithmic framework, ALDRAW, to represent engineering information and de-link design option qualification from representation. ALDRAW enhances engineering communication by enabling purposefulness, explainability, information scalability, domain abstraction, active collaboration, version control, knowledge transfer and machine learning in the representations. The framework has been successfully tested on real-world facility layout and other engineering problems, and compared with other proposed approaches in recent literature, demonstrating its potential to improve the engineering process through more effective and efficient information representation. A web application is also developed based on this framework using Django Python for real-world projects. Recommendations towards industry adoption and future research are also highlighted.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"65 ","pages":"Article 103362"},"PeriodicalIF":8.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Point-driven robot selective grinding method based on region growing for turbine blade
Authors: Ziling Wang, Lai Zou, Junjie Zhang, Yilin Mu, Wenxi Wang, Jinhao Xiao
Advanced Engineering Informatics, vol. 65, Article 103325 (published 2025-04-12). DOI: 10.1016/j.aei.2025.103325

Abstract: The complex geometric characteristics and the uneven allowance distribution of turbine blades restrict the grinding accuracy of robots. A novel point-driven robot selective grinding method based on region growing is proposed to enhance the surface accuracy of the turbine blade. First, the method calculates the curvature of every surface point among the turbine blade point clouds located at the slicing plane. Then, all surface points are segmented into intake edge points, exhaust edge points, convex points, and concave points. Moreover, the ideal normal grinding force (INGF) of every surface point at the blade edges and profile is calculated based on the allowance distribution and the material removal rate of belt grinding. INGF values, as the main characteristics of these surface points, are used in voxel-based region growing to obtain multiple grinding regions on the blade surface, and the corresponding INGF value of each region is calculated. Finally, the planned robotic grinding trajectories are modified based on the INGF values of these grinding regions. Robotic grinding experiments with the blade point clouds are conducted. The surface accuracy of the turbine blade with the proposed method is improved by 46.49% compared to that with the traditional grinding method.

Title: An integrated optimization and visualization approach for construction site layout planning considering primary and reuse building materials
Authors: Zhongya Mei, Qiaoting Tan, Yi Tan, Wen Yi, Siyu Luo
Advanced Engineering Informatics, vol. 65, Article 103314 (published 2025-04-12). DOI: 10.1016/j.aei.2025.103314

Abstract: Construction Site Layout Planning (CSLP) facilitates cost reduction, productivity enhancement, and mitigation of safety risks across both on-site and off-site construction sectors. As an optimization challenge, it primarily focuses on determining the most suitable locations and dimensions for temporary facilities (TFs) designated for materials. However, the limited attention given to the reuse of materials poses obstacles to the practical application of optimization results. Moreover, the reliance on two-dimensional (2D) visualizations for layout presentation falls short of meeting practical demands. To address these issues, this study proposes an integrated approach that combines optimization and visualization for CSLP, taking into account both primary and reuse materials. First, calculation methods are introduced for determining the on-site dimensions of TFs and the transportation frequencies and distances, considering material stacking patterns and inventory levels, transportation processes, and on-site obstacles. Subsequently, the CSLP problem is formulated as a mathematical model aimed at minimizing the total transportation cost. Furthermore, a heuristic algorithm, based on the greedy algorithm and the identified available on-site space, is designed to solve this model; a comparative analysis with widely used meta-heuristic algorithms, such as ant colony optimization, genetic algorithms, and particle swarm optimization, demonstrates the superiority of the designed algorithm in solving the CSLP problem. Lastly, Building Information Modeling (BIM)-based parametric modeling is employed to automatically and dynamically present the optimized results in a 3D format. The proposed approach is illustrated and validated through a case study conducted in Chongqing, China. The findings reveal that the proposed approach can efficiently and accurately produce 3D layouts for storage and processing TFs accommodating both primary and reused materials. This study not only enriches the existing literature on CSLP but also presents practical solutions for real-world planning.

Title: Interpretable knowledge recommendation for intelligent process planning with graph embedded deep reinforcement learning
Authors: Guanghui Zhou, Chong Han, Chao Zhang, Yaguang Zhou, Keyan Zeng, Jiancong Liu, Jiacheng Li, Kai Ding, Felix T.S. Chan
Advanced Engineering Informatics, vol. 65, Article 103321 (published 2025-04-12). DOI: 10.1016/j.aei.2025.103321

Abstract: In the context of Industry 4.0, knowledge recommendation serves as the basis for intelligent process planning. However, the limited interpretability of knowledge recommendation systems makes it challenging for users to understand and trust the recommendation process. Consequently, this paper defines an interpretable knowledge recommendation process (iKRP) task that transforms the knowledge recommendation process into a sequential decision-making task through deep reinforcement learning (DRL). It then generates relational paths to the answers based on the topic entities within the knowledge graph. To improve the interpretability of the recommended process knowledge, the following research approaches are proposed: (1) a framework for recommending sequences of process decision knowledge; (2) a TransEx knowledge graph embedding model that integrates attention mechanisms and complex-valued embeddings, with an accuracy improvement of 5.56% over the baseline method; (3) a DRL-based process knowledge recommendation network using the asynchronous superior actor-critic algorithm to achieve interpretability; (4) enhanced interpretability of the recommended process knowledge via the presentation of clear decision paths. Finally, the validity and reliability of the proposed method are demonstrated through application cases, which achieve a final accuracy rate of 0.8148.

{"title":"Human-AI cooperative generative adversarial network (GAN) for quality predictions of small-batch product series","authors":"Chun-Hua Chien , Amy J.C. Trappey","doi":"10.1016/j.aei.2025.103327","DOIUrl":"10.1016/j.aei.2025.103327","url":null,"abstract":"<div><div>This paper emphasizes the importance and the novel methodology of predicting product quality for smart manufacturing, particularly for small-batched power equipment productions with demand-specified variations in the energy sector. Accurate evaluation and parameter adjustments are crucial for achieving the highest-quality results. To predict final product quality, even with a small sample size of a specific transformer type (and unique design specification), we proposed a novel method using a generative adversarial network (GAN) for model training and fine-tuning. This approach is crucial in the context of the digital transformation of complex industrial machinery industries. This research was undertaken with a prominent power transformer manufacturer and its supply chain collaborators. To train and validate the model, data were gathered from actual systems, utilizing the expertise of the company’s personnel. The dataset included critical power transformer metrics, including core loss values, which are crucial for accurate predictions. GAN generated realistic, high-quality samples that enhanced the training process and enhanced the model’s generalization capabilities, ultimately resulting in more accurate predictions. The experimental findings indicate that the proposed approach offers manufacturers a powerful tool for predicting the quality of complex, high-value, and highly specialized industrial products, ultimately leading to a reduction in production costs. Furthermore, in comparison to the models employed in previous studies within this research series, which include AdaBoost, ARIMA-AdaBoost, and LSTM-AdaBoost, the GAN model has been enhanced to address quality prediction models for other small-batch transformer productions. In the present study, the generator and discriminator components of the GAN effectively generated usable data to enhance the limited dataset, thereby mitigating the challenges associated with a small sample size. Consequently, the methodology presented in this study has the potential for broad application across diverse industrial manufacturing sectors, thereby mitigating the constraints associated with predicting product quality and substantially enhancing the accuracy of the models.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"65 ","pages":"Article 103327"},"PeriodicalIF":8.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAD-VER: A Self-supervised, Diffusion probabilistic model-based data augmentation framework for Visual-stimulus EEG Recognition","authors":"Junjie Huang, Mingyang Li, Wanzhong Chen","doi":"10.1016/j.aei.2025.103298","DOIUrl":"10.1016/j.aei.2025.103298","url":null,"abstract":"<div><div>The decoding of EEG-based visual stimuli has become a major and important topic in the field of Brain–Computer Interfaces (BCI) research. However, there is a problem of EEG data scarcity in visual stimulus EEG decoding research, making it difficult to establish effective and stable deep learning models. Therefore, in this paper we propose a novel data augmentation framework: the Self-supervised, Adaptive variance Diffusion probabilistic model-based Visual-stimulus EEG Augmentation Framework (SAD-VER), for enhancing and recognizing visual stimulus EEG data. As the first to introduce diffusion model to EEG-based visual stimulus research, the generating process of SAD-VER is composed of a well-designed diffusion model to generate high-quality and diverse EEG samples. Additionally, this process is self-optimized with a Bayesian method-based hyperparameter optimizer to maximize the quality of the generated EEG samples in a self-supervised manner. A modified convolutional network is also utilized for quality analysis and decoding of augmented EEG. Experimental results demonstrate that the proposed SAD-VER can improve the decoding accuracy of existing models by generating high-quality EEG samples, and achieve the state-of-the-art performance in various visual stimulus EEG decoding tasks. Further analysis indicates that EEG generated by SAD-VER enhances the separability of features between different categories, and contributes to locating crucial brain region information. Code of this research is available at: <span><span>https://github.com/yellow006/SAD-VER</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"65 ","pages":"Article 103298"},"PeriodicalIF":8.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143820667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mechanism-data-driven control strategy for active suspension systems: Integrating deep reinforcement learning with differential geometry to enhance vehicle ride comfort","authors":"Cheng Wang, Guanyu Tao, Xiaoxian Cui, Quan Yao, Xinran Zhou, Konghui Guo","doi":"10.1016/j.aei.2025.103326","DOIUrl":"10.1016/j.aei.2025.103326","url":null,"abstract":"<div><div>Long-standing research on vehicle comfort optimization has centered on active and semi-active suspension control using experience-based or optimization-based algorithms. However, these methods often require substantial engineering resources and pose challenges in acquiring theoretical knowledge. The emergence of advanced Artificial Intelligence (AI), particularly data-driven approaches, has transformed how engineers tackle knowledge-intensive tasks like suspension control. Yet, the interpretability challenges of data-driven methods limit their widespread use in engineering. This study proposes a mechanism-data-driven active suspension control strategy that integrates Differential Geometry (DG) and Deep Reinforcement Learning (DRL) to achieve theoretical fusion of mechanism and data models. A DRL control architecture (DGRL) based on DG theory is introduced, enabling mechanism-level analysis of suspension control and dividing the control strategy into mechanism and data models. For the data model, a DRL optimal control framework is constructed, incorporating the Twin-Delayed Deep Deterministic policy (TD3) with an expert-guided soft-hard module (TD3-SH) and the Deterministic Experience Tracing (DET) mechanism. This effectively explores and utilizes the knowledge in massive data. Simulation results show that the DGRL strategy outperforms baseline algorithms such as Deep Deterministic Policy Gradient (DDPG), TD3, Linear Quadratic Regulator (LQR), Model Predictive Control (MPC), and TD3-SH by 75.8%, 65.5%, 77.5%, 56.3%, and 46.5%, respectively. In complex environments with varying road features and considering the domain randomization of the suspension system, the DGRL strategy can improve ride comfort by up to 85%, demonstrating its robustness and significant potential for widespread application in industrial and real-world scenarios.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"65 ","pages":"Article 103326"},"PeriodicalIF":8.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143799009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A real-time welding defect detection framework based on RT-DETR deep neural network
Authors: Gaoyang Liu, Duanrui Yang, Jun Ye, Hongjia Lu, Zhen Wang, Yang Zhao
Advanced Engineering Informatics, vol. 65, Article 103318 (published 2025-04-07). DOI: 10.1016/j.aei.2025.103318

Abstract: The quality of welds is critical to the safety and reliability of steel structure connections, underscoring the importance of accurate inspection during the welding process. To enhance inspection effectiveness, deep learning methods have gained popularity in weld defect detection for their ability to automatically learn and refine image features. However, the complex multi-stage training and inference process of these methods often fails to meet the requirements of real-time performance and accuracy. To address this problem, a framework based on the Real-Time DEtection TRansformer (RT-DETR) for deep learning-based welding defect detection is proposed. This framework improves the Transformer-based detection pipeline by eliminating the most time-consuming non-maximum suppression (NMS) step, achieving real-time detection without sacrificing accuracy. A diverse welding dataset with 1,134 images from real-world manufacturing and construction environments was developed for model training and validation. In addition, three data enhancement algorithms were explored to improve the model's generalization ability. The model achieved detection accuracy scores of mAP@0.5 = 0.996 and mAP@0.5:0.95 = 0.801, with a detection speed of 67 frames per second (FPS). Compared to the earlier Faster R-CNN, SSD, YOLOv5, YOLOv11 and DETR models, the proposed RT-DETR model demonstrates superior efficiency and accuracy. The proposed framework was further validated in on-site inspections of metal additive manufacturing, and the results confirmed that the RT-DETR-based model meets the stringent requirements for real-time inspection in metal additive manufacturing.

Title: Edge computing-oriented model optimization for synchronous acquisition of key physical parameters governing building collapses in fire
Authors: Xiaofeng Zheng, Guo-Qiang Li, Wei Ji, Shaojun Zhu
Advanced Engineering Informatics, vol. 65, Article 103303 (published 2025-04-07). DOI: 10.1016/j.aei.2025.103303

Abstract: In emergency scenarios such as building fires, swift and efficient prediction of structural collapses is paramount for effective rescue operations. This paper proposes an innovative edge computing-based framework designed to optimize models to capture the key physical parameters in real time for early warning of fire-induced building collapses, aiming to reconcile the challenge of minimizing model volume and operational costs without compromising accuracy. Our approach focuses on refining the model through a comprehensive optimization strategy. This includes an advanced sampling method that enhances accuracy by comparing datasets derived from normal and uniform distributions. Besides, thermocouples are strategically deployed, guided by explainable artificial intelligence, to significantly reduce unnecessary input features and associated costs. Additionally, the genetic algorithm is used to streamline the deep neural network by minimizing the number of trainable parameters. The culmination of these efforts results in a significantly more compact model. For predicting displacements at vertices, the model achieves a goodness of fit (R-squared value) ranging from 0.9 to 1 for 84% of the test set, while reducing the total number of parameters by 78%. Additionally, the model lowers hardware deployment costs and addresses the computational challenges of processing extensive feature sets. This paves the way for the deployment of lightweight, accurate, and cost-effective early-warning systems in emergency response contexts.

Title: A novel method for leakage monitoring in Network-Level urban medium- and Low-Pressure natural gas pipelines combining information theory and Light Gradient Boosting
Authors: Zhengrun Huang, Xinming Qian, Pengliang Li, Xingyu Shen, Longfei Hou, Yuanzhi Li, Mengqi Yuan
Advanced Engineering Informatics, vol. 65, Article 103309 (published 2025-04-07). DOI: 10.1016/j.aei.2025.103309

Abstract: Leaks in urban medium- and low-pressure natural gas pipeline networks pose substantial risks and detection difficulties, compromising pipeline network reliability and urban safety. Little research has been conducted on leakage monitoring of network-level urban pipelines. This paper proposes a machine learning framework based on a dataset of 141,236 samples collected over nearly five years. The framework classifies, in real time, the causes of anomalous signals (leaks being one such cause) collected by IoT terminals located near each pipeline section within a large network, at both the individual-sampling-point level and the entire-incident level, thereby enabling monitoring. Feature extraction, a crucial part of the framework, relies solely on sensor data and computational rules defined through diffusion laws and data analysis, requiring no prior information. The original feature set includes 360 features with physical significance. A novel iMICRS reduction method integrating information theory and rough set theory is developed to determine the optimal feature combination. Combining ten-fold cross-validation and Bayesian optimization, LightGBM achieves the highest ROC-AUC of 0.871 on a test set covering 9,885 sampling points. SHAP is used for prediction interpretation. The classification method for the entire incident, based on the point-level predictions, achieves a recall rate of 90% in diagnosing leaks (including multiple small leaks). This study provides an effective full-process engineering solution for leakage monitoring in urban medium- and low-pressure pipeline networks, based on actual operational data.
