{"title":"加法特征归因方法:流体动力学和传热学可解释人工智能综述","authors":"Andrés Cremades, Sergio Hoyas, Ricardo Vinuesa","doi":"arxiv-2409.11992","DOIUrl":null,"url":null,"abstract":"The use of data-driven methods in fluid mechanics has surged dramatically in\nrecent years due to their capacity to adapt to the complex and multi-scale\nnature of turbulent flows, as well as to detect patterns in large-scale\nsimulations or experimental tests. In order to interpret the relationships\ngenerated in the models during the training process, numerical attributions\nneed to be assigned to the input features. One important example are the\nadditive-feature-attribution methods. These explainability methods link the\ninput features with the model prediction, providing an interpretation based on\na linear formulation of the models. The SHapley Additive exPlanations (SHAP\nvalues) are formulated as the only possible interpretation that offers a unique\nsolution for understanding the model. In this manuscript, the\nadditive-feature-attribution methods are presented, showing four common\nimplementations in the literature: kernel SHAP, tree SHAP, gradient SHAP, and\ndeep SHAP. Then, the main applications of the additive-feature-attribution\nmethods are introduced, dividing them into three main groups: turbulence\nmodeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics\nand heat transfer. This review shows thatexplainability techniques, and in\nparticular additive-feature-attribution methods, are crucial for implementing\ninterpretable and physics-compliant deep-learning models in the fluid-mechanics\nfield.","PeriodicalId":501125,"journal":{"name":"arXiv - PHYS - Fluid Dynamics","volume":"33 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Additive-feature-attribution methods: a review on explainable artificial intelligence for fluid dynamics and heat transfer\",\"authors\":\"Andrés Cremades, Sergio Hoyas, Ricardo Vinuesa\",\"doi\":\"arxiv-2409.11992\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of data-driven methods in fluid mechanics has surged dramatically in\\nrecent years due to their capacity to adapt to the complex and multi-scale\\nnature of turbulent flows, as well as to detect patterns in large-scale\\nsimulations or experimental tests. In order to interpret the relationships\\ngenerated in the models during the training process, numerical attributions\\nneed to be assigned to the input features. One important example are the\\nadditive-feature-attribution methods. These explainability methods link the\\ninput features with the model prediction, providing an interpretation based on\\na linear formulation of the models. The SHapley Additive exPlanations (SHAP\\nvalues) are formulated as the only possible interpretation that offers a unique\\nsolution for understanding the model. In this manuscript, the\\nadditive-feature-attribution methods are presented, showing four common\\nimplementations in the literature: kernel SHAP, tree SHAP, gradient SHAP, and\\ndeep SHAP. Then, the main applications of the additive-feature-attribution\\nmethods are introduced, dividing them into three main groups: turbulence\\nmodeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics\\nand heat transfer. 
This review shows thatexplainability techniques, and in\\nparticular additive-feature-attribution methods, are crucial for implementing\\ninterpretable and physics-compliant deep-learning models in the fluid-mechanics\\nfield.\",\"PeriodicalId\":501125,\"journal\":{\"name\":\"arXiv - PHYS - Fluid Dynamics\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Fluid Dynamics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11992\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Fluid Dynamics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11992","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Additive-feature-attribution methods: a review on explainable artificial intelligence for fluid dynamics and heat transfer
The use of data-driven methods in fluid mechanics has surged dramatically in recent years due to their capacity to adapt to the complex and multi-scale nature of turbulent flows, as well as to detect patterns in large-scale simulations or experimental tests. In order to interpret the relationships learned by the models during training, numerical attributions need to be assigned to the input features. One important example is the family of additive-feature-attribution methods. These explainability methods link the input features with the model prediction, providing an interpretation based on a linear formulation of the models.
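For context, the linear formulation referred to here is the standard one from the SHAP literature (not quoted from this abstract): the explanation model g is linear in simplified binary inputs,

\[
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z_i', \qquad z' \in \{0, 1\}^M,
\]

where M is the number of input features, \phi_0 is the base value (the expected model output), and \phi_i is the attribution assigned to feature i.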
The SHapley Additive exPlanations (SHAP values) are formulated as the only attribution method of this kind that offers a unique solution for understanding the model.
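This uniqueness result comes from cooperative game theory: the attributions \phi_i are the classical Shapley values, which average the marginal contribution of feature i over all feature subsets. The standard expression (again from the SHAP literature rather than this abstract) is

\[
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right],
\]

where F is the full feature set and f_S denotes the model restricted to the features in subset S.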
In this manuscript, the additive-feature-attribution methods are presented, together with four common implementations from the literature: kernel SHAP, tree SHAP, gradient SHAP, and deep SHAP (a brief usage sketch follows below).
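As a concrete illustration, the following is a minimal sketch of how two of these implementations are invoked through the open-source shap Python package; the toy regression model and synthetic data are hypothetical stand-ins for a trained fluid-mechanics surrogate, not taken from the reviewed paper.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in data: 100 samples with 4 input features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Any trained regressor can play the role of the black-box model.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Tree SHAP: exact attributions that exploit the tree-ensemble structure.
tree_values = shap.TreeExplainer(model).shap_values(X[:5])

# Kernel SHAP: model-agnostic approximation built on a background sample.
kernel_values = shap.KernelExplainer(model.predict, X[:20]).shap_values(X[:5])

# Gradient SHAP (shap.GradientExplainer) and deep SHAP (shap.DeepExplainer)
# follow the same pattern but require a differentiable deep-learning model.

The trade-off illustrated here: kernel SHAP is model-agnostic but approximate, whereas tree SHAP exploits the structure of tree ensembles to compute exact attributions in polynomial time.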
Then, the main applications of the additive-feature-attribution methods are introduced, divided into three main groups: turbulence modeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics and heat transfer. This review shows that explainability techniques, and in particular additive-feature-attribution methods, are crucial for implementing interpretable and physics-compliant deep-learning models in the fluid-mechanics field.