{"title":"铜损伤校正深度神经方法的训练与可解释性。","authors":"K. Hickmann, Skylar Callis, Stephen Andrews","doi":"10.1115/vvuq2023-108759","DOIUrl":null,"url":null,"abstract":"\n We present an application of convolutional neural networks for calibration of a tensile plasticity (TePla) damage model simulating the spallation in copper under high-explosive shock loading. Using a high-fidelity, multi-physics simulation developed by the Advanced Simulation and Computing program at Los Alamos National Laboratory (LANL), we simulate hundreds of variations of a high-explosive shock experiment involving a copper coupon. From this synthetic data, we train neural networks to learn the inverse mapping between the coupon’s late-time density field, or an associated synthetic radiograph, and the simulation’s TePla damage parameters. It is demonstrated that, using a simple convolutional architecture, we can train networks to infer damage parameters from density fields accurately. Neural network inference directly from synthetic radiographs is significantly more challenging. Application of machine-learning methods must be accompanied by an analysis of how they are making inferences in order to build confidence in predictions and to identify likely shortcomings of the technique. To understand what the model is learning, individual layer outputs are extracted and examined. Each layer in the network identifies multiple features. However, each of these features are not necessarily of equal importance in the network’s final prediction of a given damage parameter. By examining the features overlaid on the input hydrodynamic fields, we assess the question of whether or not the model’s accuracy can be attributed to human-recognizable characteristics. In this work we give a detailed description of our data-generation methods and the learning problem we address. We then outline our neural architecture trained for damage calibration and discuss considerations made during training and evaluation of accuracy. 
Methods for human interpretation of the network’s inference process are then put forward, including extraction of learned features from the trained network and techniques to assess sensitivity of inferences to the learned features.","PeriodicalId":387733,"journal":{"name":"ASME 2023 Verification, Validation, and Uncertainty Quantification Symposium","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Training and Interpretability of Deep-Neural Methods for Damage Calibration in Copper.\",\"authors\":\"K. Hickmann, Skylar Callis, Stephen Andrews\",\"doi\":\"10.1115/vvuq2023-108759\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n We present an application of convolutional neural networks for calibration of a tensile plasticity (TePla) damage model simulating the spallation in copper under high-explosive shock loading. Using a high-fidelity, multi-physics simulation developed by the Advanced Simulation and Computing program at Los Alamos National Laboratory (LANL), we simulate hundreds of variations of a high-explosive shock experiment involving a copper coupon. From this synthetic data, we train neural networks to learn the inverse mapping between the coupon’s late-time density field, or an associated synthetic radiograph, and the simulation’s TePla damage parameters. It is demonstrated that, using a simple convolutional architecture, we can train networks to infer damage parameters from density fields accurately. Neural network inference directly from synthetic radiographs is significantly more challenging. Application of machine-learning methods must be accompanied by an analysis of how they are making inferences in order to build confidence in predictions and to identify likely shortcomings of the technique. 
To understand what the model is learning, individual layer outputs are extracted and examined. Each layer in the network identifies multiple features. However, each of these features are not necessarily of equal importance in the network’s final prediction of a given damage parameter. By examining the features overlaid on the input hydrodynamic fields, we assess the question of whether or not the model’s accuracy can be attributed to human-recognizable characteristics. In this work we give a detailed description of our data-generation methods and the learning problem we address. We then outline our neural architecture trained for damage calibration and discuss considerations made during training and evaluation of accuracy. Methods for human interpretation of the network’s inference process are then put forward, including extraction of learned features from the trained network and techniques to assess sensitivity of inferences to the learned features.\",\"PeriodicalId\":387733,\"journal\":{\"name\":\"ASME 2023 Verification, Validation, and Uncertainty Quantification Symposium\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ASME 2023 Verification, Validation, and Uncertainty Quantification Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1115/vvuq2023-108759\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ASME 2023 Verification, Validation, and Uncertainty Quantification 
Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/vvuq2023-108759","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training and Interpretability of Deep-Neural Methods for Damage Calibration in Copper.
We present an application of convolutional neural networks to the calibration of a tensile plasticity (TePla) damage model simulating spallation in copper under high-explosive shock loading. Using a high-fidelity, multi-physics simulation developed by the Advanced Simulation and Computing program at Los Alamos National Laboratory (LANL), we simulate hundreds of variations of a high-explosive shock experiment involving a copper coupon. From these synthetic data, we train neural networks to learn the inverse mapping between the coupon's late-time density field, or an associated synthetic radiograph, and the simulation's TePla damage parameters. We demonstrate that a simple convolutional architecture can be trained to infer damage parameters accurately from density fields; inference directly from synthetic radiographs is significantly more challenging. Applications of machine-learning methods must be accompanied by an analysis of how they make inferences, both to build confidence in predictions and to identify likely shortcomings of the technique. To understand what the model is learning, we extract and examine individual layer outputs. Each layer in the network identifies multiple features, but these features are not all of equal importance to the network's final prediction of a given damage parameter. By examining the features overlaid on the input hydrodynamic fields, we assess whether the model's accuracy can be attributed to human-recognizable characteristics. In this work we give a detailed description of our data-generation methods and the learning problem we address. We then outline the neural architecture trained for damage calibration and discuss considerations made during training and evaluation of accuracy.
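The layer-output inspection described above can be illustrated with a toy sketch (hypothetical code, not the authors' implementation): a hand-built 2-D convolution applied to a synthetic density field, with per-kernel feature maps retained for examination, as one would when probing a trained convolutional layer.

```python
# Toy sketch of feature-map extraction (hypothetical; not the paper's code).
# A "learned" kernel is convolved with an input field, and each kernel's
# feature map is kept so it can be inspected and overlaid on the input.

def conv2d(field, kernel):
    """Valid-mode 2-D cross-correlation of a field with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(field), len(field[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(field[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def extract_features(field, kernels):
    """Apply each kernel and keep its per-kernel feature map for inspection."""
    return {name: conv2d(field, k) for name, k in kernels.items()}

# Toy density field with a sharp horizontal interface: a crude stand-in
# for a spall plane in a late-time density field.
density = [[1.0] * 5] * 2 + [[0.0] * 5] * 3

# Hypothetical "learned" kernels: a vertical-gradient detector and a smoother.
kernels = {
    "edge": [[1.0, 1.0], [-1.0, -1.0]],    # responds to the interface
    "mean": [[0.25, 0.25], [0.25, 0.25]],  # local average
}

features = extract_features(density, kernels)
# The edge detector fires only along the rows straddling the interface.
```

In a real network one would hook the trained layers to capture their activations, but the inspection step is the same: look at which spatial structures in the input each feature map responds to.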
Finally, we put forward methods for human interpretation of the network's inference process, including extraction of learned features from the trained network and techniques to assess the sensitivity of inferences to those features.
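One standard way to probe the sensitivity of an inference to input structure is occlusion analysis; the sketch below assumes that technique for illustration (the paper's exact sensitivity method is not specified here, and the `predict` model is a hypothetical stand-in).

```python
# Hedged sketch of occlusion-style sensitivity analysis (a standard
# interpretability technique; not necessarily the authors' method):
# zero out patches of the input field and record how much a scalar
# prediction changes, attributing that change to the masked region.

def occlusion_sensitivity(field, predict, patch=2):
    """Map each input cell to the largest prediction change caused by
    masking any patch that covers it."""
    base = predict(field)
    h, w = len(field), len(field[0])
    sens = [[0.0] * w for _ in range(h)]
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            masked = [row[:] for row in field]      # fresh copy per patch
            for a in range(patch):
                for b in range(patch):
                    masked[i + a][j + b] = 0.0
            delta = abs(predict(masked) - base)
            for a in range(patch):
                for b in range(patch):
                    sens[i + a][j + b] = max(sens[i + a][j + b], delta)
    return sens

# Hypothetical toy "model": predicts a damage-like scalar from the mass
# in the top two rows of the field.
def predict(field):
    return sum(field[0]) + sum(field[1])

density = [[1.0] * 4] * 2 + [[0.0] * 4] * 2
sens = occlusion_sensitivity(density, predict)
# Patches overlapping the top rows perturb the prediction most, so the
# sensitivity map highlights exactly the region the model relies on.
```

High-sensitivity regions that coincide with physically meaningful structures (e.g. the spall plane) support the claim that the network's accuracy rests on human-recognizable characteristics rather than spurious correlations.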