Performance Estimation for Efficient Image Segmentation Training of Weather Radar Algorithms

Joseph McDonald, J. Kurdzo, P. Stepanian, M. Veillette, David Bestor, Michael Jones, V. Gadepally, S. Samsi

2022 IEEE High Performance Extreme Computing Conference (HPEC), published 2022-09-19. DOI: 10.1109/HPEC55821.2022.9926400
Deep learning's demand for compute resources is growing dramatically, with a corresponding increase in the energy required to develop, explore, and test model architectures for various applications. Parameter tuning customarily involves training multiple models while searching a grid of parameter choices, either randomly or exhaustively, and strategies that apply more complex search methods to identify candidate model architectures require significant computation for each architecture sampled from the model space. However, extensively training many individual models in order to choose a single best-performing model for future inference can seem unnecessarily wasteful at a time when energy efficiency and minimizing computing's environmental impact are increasingly important. Techniques and algorithms that reduce the computational budget needed to identify and train accurate deep networks among many candidates are therefore greatly needed. This work considers one recently proposed approach, Training Speed Estimation, alongside deep learning approaches to a common hydrometeor classification problem: hail prediction through semantic image segmentation. We apply this method to the training of a variety of segmentation models and evaluate its effectiveness as a performance tracking approach for energy-aware neural network applications. This approach, together with early stopping, offers a straightforward strategy for minimizing energy expenditure. By measuring consumption and estimating the level of energy savings, we characterize this strategy as a practical method for minimizing deep learning's energy and carbon impact.
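The abstract does not spell out how Training Speed Estimation (TSE) is computed, but the commonly cited formulation sums the mini-batch training losses accumulated over a short initial training budget and uses that sum to rank candidate models: the faster the loss falls, the more promising the candidate. The sketch below illustrates that idea for a segmentation setting under those assumptions; the function name, the three-epoch budget, and the candidate-model builders are illustrative, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def tse_score(model: nn.Module, loader: DataLoader, epochs: int = 3,
              lr: float = 1e-3, device: str = "cpu") -> float:
    """Sum of training losses over a short budget (lower is better).

    TSE uses this sum as a cheap proxy for final accuracy when ranking
    candidate models, avoiding a full training run for each one.
    """
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()  # per-pixel loss for semantic segmentation
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    total = 0.0
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
            total += loss.item()  # accumulate rather than average
    return total


# Hypothetical usage: score every candidate on the same short budget, then
# spend the full training (and energy) budget only on the most promising one.
# candidates = {"unet_small": build_unet_small(), "deeplab": build_deeplab()}
# scores = {name: tse_score(m, train_loader) for name, m in candidates.items()}
# best_name = min(scores, key=scores.get)
```

Used this way, the estimator complements early stopping: weak candidates are discarded after only a few epochs, and only the selected model consumes a full training budget, which is where the energy savings the abstract describes would come from.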