Yizheng Wang, Jinshuai Bai, Mohammad Sadegh Eshaghi, Cosmin Anitescu, Xiaoying Zhuang, Timon Rabczuk, Yinghua Liu
International Journal of Mechanical System Dynamics (国际机械系统动力学学报), Vol. 5, No. 2, pp. 212-235. Published 2025-06-06 (Journal Article). DOI: 10.1002/msd2.70030. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/msd2.70030
Transfer Learning in Physics-Informed Neural Networks: Full Fine-Tuning, Lightweight Fine-Tuning, and Low-Rank Adaptation
AI for PDEs has garnered significant attention, particularly physics-informed neural networks (PINNs). However, PINNs are typically limited to solving specific problems, and any change in problem conditions necessitates retraining. Therefore, we explore the generalization capability of transfer learning in the strong and energy forms of PINNs across different boundary conditions, materials, and geometries. The transfer learning methods we employ include full fine-tuning, lightweight fine-tuning, and low-rank adaptation (LoRA). Numerical experiments include the Taylor-Green vortex in fluid mechanics, as well as functionally graded materials with elastic properties and a square plate with a circular hole in solid mechanics. The results demonstrate that full fine-tuning and LoRA can significantly improve convergence speed while providing a slight enhancement in accuracy. However, the overall performance of lightweight fine-tuning is suboptimal, as its accuracy and convergence speed are inferior to those of full fine-tuning and LoRA.
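To illustrate the low-rank adaptation idea referenced in the abstract, the following is a minimal sketch (not the authors' implementation) of a LoRA-adapted dense layer: the pretrained weight W is frozen, and only a low-rank update B @ A, scaled by alpha / r, is trained during transfer. The class name, dimensions, and initialization scheme here are illustrative assumptions.

```python
import numpy as np

class LoRALinear:
    """Dense layer with a frozen base weight plus a trainable low-rank
    update, in the LoRA style: W_eff = W + (alpha / r) * B @ A.
    Illustrative sketch only; names and sizes are assumptions."""

    def __init__(self, in_dim, out_dim, r=4, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen "pretrained" weight (stands in for a trained PINN layer).
        self.W = rng.standard_normal((out_dim, in_dim))
        # Trainable rank-r factors: A small random, B zero-initialized,
        # so the adapted layer initially matches the base layer exactly.
        self.A = 0.01 * rng.standard_normal((r, in_dim))
        self.B = np.zeros((out_dim, r))
        self.scale = alpha / r

    def __call__(self, x):
        # Base path plus scaled low-rank adaptation path.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

Because B starts at zero, the adapted network reproduces the pretrained one at the start of transfer, and only r * (in_dim + out_dim) parameters per layer are updated, which is the source of the faster convergence the abstract reports for LoRA relative to lightweight fine-tuning.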