Physics-informed modularized neural network for advanced building control by deep reinforcement learning
Zixin Jiang, Xuezheng Wang, Bing Dong
Advances in Applied Energy, Vol. 19, Article 100237 · Published 2025-08-11
DOI: 10.1016/j.adapen.2025.100237 · https://www.sciencedirect.com/science/article/pii/S2666792425000319
Journal Article · JCR Q1 (Energy & Fuels) · Impact Factor 13.8
Citations: 0
Abstract
Physics-informed machine learning (PIML) provides a promising solution for building energy modeling and can serve as a virtual environment in which reinforcement learning (RL) agents interact and learn. However, how to integrate physics priors efficiently, evaluate the effectiveness of physics constraints, balance model accuracy against physics consistency, and enable real-world implementation remain open challenges. To address these gaps, this study introduces a Physics-Informed Modularized Neural Network (PI-ModNN), which integrates physics priors through a physics-informed model structure, loss functions, and hard constraints. A new evaluation metric called "temperature response violation" is developed to quantify the physical consistency of data-driven building dynamic models under varying control inputs and training data sizes. Additionally, a physics prior evaluation framework based on "rule importance" is proposed to quantify the contribution of each individual physical prior, offering guidance on selecting appropriate PIML techniques. The results indicate that incorporating physical priors does not always improve model performance; inappropriate physical priors can decrease both model accuracy and consistency, whereas hard constraints effectively enforce consistency. Furthermore, we present a general workflow for developing control-oriented PIML models and integrating them with deep reinforcement learning (DRL). Following this framework, a three-month case study implementing DRL in an office space demonstrates potential energy savings of 31.4%. Finally, we provide a general guideline for integrating data-driven models with advanced building control through a four-step evaluation framework, paving the way for reliable and scalable implementation of advanced building controls.
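The abstract's "temperature response violation" metric checks whether a learned building model responds to control inputs in a physically plausible way (e.g., adding heating power should never lower the predicted zone temperature). The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that idea: `temperature_response_violation`, the toy surrogate models, and all numeric values are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def temperature_response_violation(predict, states, heat_inputs, delta=1.0):
    """Fraction of samples where increasing the heating input lowers the
    predicted next-step zone temperature -- a physically impossible
    response. Illustrative check only; the paper's metric may differ."""
    base = predict(states, heat_inputs)               # T_{t+1} at nominal input
    perturbed = predict(states, heat_inputs + delta)  # T_{t+1} with more heating
    violations = perturbed < base                     # more heat must not cool
    return float(np.mean(violations))

# Toy surrogate with a correctly signed input term: never violates.
def consistent_model(states, u):
    return 0.9 * states + 0.05 * u

# Toy surrogate with a wrong-signed input term: always violates.
def inconsistent_model(states, u):
    return 0.9 * states - 0.05 * u

states = np.linspace(18.0, 26.0, 50)  # current zone temperatures [deg C]
u = np.full(50, 2.0)                  # nominal heating power [kW]

print(temperature_response_violation(consistent_model, states, u))    # 0.0
print(temperature_response_violation(inconsistent_model, states, u))  # 1.0
```

A violation rate of 0.0 indicates the model's control response is physically consistent over the tested inputs; rates above zero flag regions where a purely data-driven fit contradicts the physics, which is where the paper's hard constraints would intervene.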