{"title":"Multi-fidelity machine learning framework for life cycle assessment: a manufacturing case study on aluminum rolling","authors":"Muhammad Umar Farooq , Daniel Cooper","doi":"10.1016/j.procir.2024.12.014","DOIUrl":null,"url":null,"abstract":"<div><div>Manufacturing industries are increasingly focused on achieving sustainability targets, which has driven the development of environmental impact models often based on life cycle assessment (LCA) methods and databases. However, these databases tend to be too generic to ensure accurate modelling (e.g., using global or regional average impact values per unit of mass processed). To improve accuracy, companies can generate customized data inventories through experiments or simulations, but these approaches are typically costly, time-consuming, and may disrupt daily operations. This article introduces a partial physics-based, multi-fidelity machine learning approach to generate low-cost, environmental impact models tailored to specific manufacturing systems. The framework uses reduced-order, low-fidelity, physics-based models to capture the process dynamics, followed by transfer learning with small volumes of high-fidelity (e.g., experimental) data. This allows for accurate gate-to-gate environmental impact predictions without the need for extensive experimental campaigns. The framework is demonstrated on a lab-scale metal rolling mill for predicting power consumption in gate-to-gate assessments. A simple slab analysis metal forming model trains the base learner, and adaptive boosting is used for transfer learning on experimental data. The framework achieved superior performance, requiring 13% less experimental data than a standalone machine learning model of the same accuracy trained solely on experimental data. This approach may offer a cost-effective solution for generating accurate predictive models in scenarios where data collection is challenging, either due to rigid use of standard process settings or data collection cost and time constraints.</div></div>","PeriodicalId":20535,"journal":{"name":"Procedia CIRP","volume":"135 ","pages":"Pages 181-186"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedia CIRP","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2212827125002586","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Manufacturing industries are increasingly focused on achieving sustainability targets, which has driven the development of environmental impact models often based on life cycle assessment (LCA) methods and databases. However, these databases tend to be too generic to ensure accurate modelling (e.g., using global or regional average impact values per unit of mass processed). To improve accuracy, companies can generate customized data inventories through experiments or simulations, but these approaches are typically costly, time-consuming, and may disrupt daily operations. This article introduces a partially physics-based, multi-fidelity machine learning approach to generate low-cost environmental impact models tailored to specific manufacturing systems. The framework uses reduced-order, low-fidelity, physics-based models to capture the process dynamics, followed by transfer learning with small volumes of high-fidelity (e.g., experimental) data. This allows for accurate gate-to-gate environmental impact predictions without the need for extensive experimental campaigns. The framework is demonstrated on a lab-scale metal rolling mill for predicting power consumption in gate-to-gate assessments. A simple slab-analysis metal forming model trains the base learner, and adaptive boosting is used for transfer learning on the experimental data. The framework achieved superior performance, requiring 13% less experimental data than a standalone machine learning model of the same accuracy trained solely on experimental data. This approach may offer a cost-effective solution for generating accurate predictive models where data collection is challenging, whether because standard process settings cannot be varied or because of the cost and time of collecting data.
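The abstract describes a pipeline in which a cheap, physics-based slab-analysis model trains a base learner and adaptive boosting then adapts it with a small amount of experimental data. The Python sketch below illustrates one way such a multi-fidelity pipeline can be assembled with scikit-learn, under several assumptions: the simplified flat-rolling power formula, process ranges, material constants, and the residual-correction transfer step are illustrative choices, and the small "experimental" dataset is a synthetic placeholder showing where real measurements would enter; none of this is the authors' exact implementation.

```python
# Illustrative multi-fidelity sketch: a low-fidelity slab-analysis rolling model
# generates cheap training data for a base learner, and adaptive boosting then
# corrects that learner using a small high-fidelity (experimental) dataset.
# All constants, ranges, and the placeholder "experimental" data are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor

def slab_rolling_power(h0, h1, width, roll_radius, roll_rpm, flow_stress):
    """Low-fidelity flat-rolling power estimate (W) from a simple slab analysis.

    Contact length L = sqrt(R * dh); roll force F = k * w * L;
    torque per roll ~ 0.5 * F * L; power = 2 rolls * torque * angular speed.
    """
    dh = h0 - h1
    contact_len = np.sqrt(roll_radius * dh)        # projected contact length, m
    force = flow_stress * width * contact_len      # roll separating force, N
    torque = 0.5 * force * contact_len             # torque per roll, N*m
    omega = 2.0 * np.pi * roll_rpm / 60.0          # roll angular speed, rad/s
    return 2.0 * torque * omega                    # total power, both rolls, W

rng = np.random.default_rng(0)

# 1) Cheap low-fidelity data: sample process settings, label with the physics model.
n_lf = 2000
X_lf = np.column_stack([
    rng.uniform(2e-3, 6e-3, n_lf),    # entry thickness h0, m
    rng.uniform(0.5, 0.9, n_lf),      # exit/entry thickness ratio h1/h0
    rng.uniform(0.05, 0.15, n_lf),    # strip width, m
    rng.uniform(10.0, 60.0, n_lf),    # roll speed, rpm
])
h0, ratio, width, rpm = X_lf.T
y_lf = slab_rolling_power(h0, h0 * ratio, width, roll_radius=0.05,
                          roll_rpm=rpm, flow_stress=120e6)

base_learner = GradientBoostingRegressor(random_state=0).fit(X_lf, y_lf)

# 2) Transfer step: fit an AdaBoost regressor to the base learner's residuals on
#    a small high-fidelity set (synthetic stand-ins for experimental measurements).
X_hf = X_lf[:40] + rng.normal(0.0, 1e-4, (40, 4))         # placeholder experiments
y_hf = y_lf[:40] * rng.uniform(1.1, 1.3, 40) + 150.0      # placeholder measured power
residual_model = AdaBoostRegressor(n_estimators=50, random_state=0)
residual_model.fit(X_hf, y_hf - base_learner.predict(X_hf))

def predict_power(X):
    """Multi-fidelity prediction: physics-trained base learner + boosted correction."""
    return base_learner.predict(X) + residual_model.predict(X)

print(predict_power(X_hf[:3]))
```

In this sketch the boosted residual model only needs enough experimental points to capture the systematic gap between the slab-analysis approximation and the real mill, which is the intuition behind the reported data saving; the paper's 13% figure comes from the authors' lab-scale experiments, not from this toy setup.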