TwinLab: a framework for data-efficient training of non-intrusive reduced-order models for digital twins

IF 1.5 · CAS Zone 4 (Engineering & Technology) · JCR Q3 (Computer Science, Interdisciplinary Applications)
Maximilian Kannapinn, Michael Schäfer, Oliver Weeger
{"title":"TwinLab: a framework for data-efficient training of non-intrusive reduced-order models for digital twins","authors":"Maximilian Kannapinn, Michael Schäfer, Oliver Weeger","doi":"10.1108/ec-11-2023-0855","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>Simulation-based digital twins represent an effort to provide high-accuracy real-time insights into operational physical processes. However, the computation time of many multi-physical simulation models is far from real-time. It might even exceed sensible time frames to produce sufficient data for training data-driven reduced-order models. This study presents TwinLab, a framework for data-efficient, yet accurate training of neural-ODE type reduced-order models with only two data sets.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>Correlations between test errors of reduced-order models and distinct features of corresponding training data are investigated. Having found the single best data sets for training, a second data set is sought with the help of similarity and error measures to enrich the training process effectively.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>Adding a suitable second training data set in the training process reduces the test error by up to 49% compared to the best base reduced-order model trained only with one data set. Such a second training data set should at least yield a good reduced-order model on its own and exhibit higher levels of dissimilarity to the base training data set regarding the respective excitation signal. Moreover, the base reduced-order model should have elevated test errors on the second data set. The relative error of the time series ranges from 0.18% to 0.49%. Prediction speed-ups of up to a factor of 36,000 are observed.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>The proposed computational framework facilitates the automated, data-efficient extraction of non-intrusive reduced-order models for digital twins from existing simulation models, independent of the simulation software.</p><!--/ Abstract__block -->","PeriodicalId":50522,"journal":{"name":"Engineering Computations","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Computations","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1108/ec-11-2023-0855","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose

Simulation-based digital twins aim to provide high-accuracy, real-time insights into operational physical processes. However, the computation time of many multi-physical simulation models is far from real-time, and it may even exceed sensible time frames for producing sufficient data to train data-driven reduced-order models. This study presents TwinLab, a framework for data-efficient yet accurate training of neural-ODE-type reduced-order models with only two data sets.

Design/methodology/approach

Correlations between the test errors of reduced-order models and distinct features of the corresponding training data are investigated. Having found the single best data set for training, a second data set is sought with the help of similarity and error measures to enrich the training process effectively.
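The following minimal sketch illustrates how such a data-set enrichment step could be organized. It is an illustration only, not the paper's actual code: the function names, the dissimilarity measure (one minus the Pearson correlation of the excitation signals) and the scoring rule are assumptions.

```python
# Hypothetical sketch of ranking candidate second training data sets for a
# reduced-order model (ROM); measures and scoring are illustrative assumptions.
import numpy as np

def signal_dissimilarity(excitation_a: np.ndarray, excitation_b: np.ndarray) -> float:
    # Dissimilarity of two excitation signals as 1 - Pearson correlation.
    return 1.0 - np.corrcoef(excitation_a, excitation_b)[0, 1]

def rank_candidate_data_sets(base_rom_errors, candidate_own_errors,
                             base_excitation, candidate_excitations):
    # Score each candidate second data set:
    #   + test error of the base ROM on the candidate (should be elevated),
    #   - test error of a ROM trained on the candidate alone (should be low),
    #   + dissimilarity of the candidate excitation to the base excitation.
    scores = [
        base_rom_errors[i]
        - candidate_own_errors[i]
        + signal_dissimilarity(base_excitation, excitation)
        for i, excitation in enumerate(candidate_excitations)
    ]
    # Indices of the candidates, most promising (highest score) first.
    return list(np.argsort(scores)[::-1])
```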

Findings

Adding a suitable second training data set to the training process reduces the test error by up to 49% compared to the best base reduced-order model trained with only one data set. Such a second training data set should at least yield a good reduced-order model on its own and exhibit a high level of dissimilarity to the base training data set with respect to the respective excitation signal. Moreover, the base reduced-order model should have elevated test errors on the second data set. The relative error of the predicted time series ranges from 0.18% to 0.49%, and prediction speed-ups of up to a factor of 36,000 are observed.
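As a worked illustration of a relative time-series error of the magnitude reported above, the short sketch below computes a relative L2 error in percent for a synthetic signal. The exact error norm and data used in the paper are not reproduced here; this formulation is an assumption.

```python
# Hypothetical sketch of a relative time-series error in percent; the L2 norm
# is an assumed choice, not necessarily the one used in the paper.
import numpy as np

def relative_error_percent(rom_output: np.ndarray, reference_output: np.ndarray) -> float:
    # Relative L2 error of the ROM prediction with respect to the full-order reference.
    return 100.0 * np.linalg.norm(rom_output - reference_output) / np.linalg.norm(reference_output)

# Synthetic example: a reference signal and a slightly perturbed "prediction".
t = np.linspace(0.0, 10.0, 1001)
reference = np.sin(t)
prediction = reference + 0.002 * np.random.default_rng(0).standard_normal(t.size)
print(f"Relative error: {relative_error_percent(prediction, reference):.2f}%")  # roughly a few tenths of a percent
```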

Originality/value

The proposed computational framework facilitates the automated, data-efficient extraction of non-intrusive reduced-order models for digital twins from existing simulation models, independent of the simulation software.

Source journal
Engineering Computations
Category: Engineering & Technology (Engineering, Comprehensive)
CiteScore: 3.40
Self-citation rate: 6.20%
Articles published: 61
Review time: 5 months
Journal description: The journal presents its readers with broad coverage, across all branches of engineering and science, of the latest developments and applications of new solution algorithms, innovative numerical methods and/or solution techniques directed at the utilization of computational methods in engineering analysis, engineering design and practice. For more information visit: http://www.emeraldgrouppublishing.com/ec.htm