Improve Machine Learning carbon footprint using Nvidia GPU and Mixed Precision training for classification algorithms

Andrew Antonopoulos
{"title":"Improve Machine Learning carbon footprint using Nvidia GPU and Mixed Precision training for classification algorithms","authors":"Andrew Antonopoulos","doi":"arxiv-2409.07853","DOIUrl":null,"url":null,"abstract":"This study was part of my dissertation for my master degree and compares the\npower consumption using the default floating point (32bit) and Nvidia mixed\nprecision (16bit and 32bit) while training a classification ML model. A custom\nPC with specific hardware was built to perform the experiments, and different\nML hyper-parameters, such as batch size, neurons, and epochs, were chosen to\nbuild Deep Neural Networks (DNN). Additionally, various software was used\nduring the experiments to collect the power consumption data in Watts from the\nGraphics Processing Unit (GPU), Central Processing Unit (CPU), Random Access\nMemory (RAM) and manually from a wattmeter connected to the wall. A\nbenchmarking test with default hyper parameter values for the DNN was used as a\nreference, while the experiments used a combination of different settings. The\nresults were recorded in Excel, and descriptive statistics were chosen to\ncalculate the mean between the groups and compare them using graphs and tables.\nThe outcome was positive when using mixed precision combined with specific\nhyper-parameters. Compared to the benchmarking, the optimisation for the\nclassification reduced the power consumption between 7 and 11 Watts. Similarly,\nthe carbon footprint is reduced because the calculation uses the same power\nconsumption data. Still, a consideration is required when configuring\nhyper-parameters because it can negatively affect hardware performance.\nHowever, this research required inferential statistics, specifically ANOVA and\nT-test, to compare the relationship between the means. Furthermore, tests\nindicated no statistical significance of the relationship between the\nbenchmarking and experiments. However, a more extensive implementation with a\ncluster of GPUs can increase the sample size significantly, as it is an\nessential factor and can change the outcome of the statistical analysis.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This study was part of my master's degree dissertation and compares the power consumption of training a classification ML model with the default 32-bit floating-point format against Nvidia mixed precision (16-bit and 32-bit). A custom PC with specific hardware was built to perform the experiments, and different ML hyper-parameters, such as batch size, number of neurons, and epochs, were chosen to build Deep Neural Networks (DNNs). Additionally, various software was used during the experiments to collect power consumption data in Watts from the Graphics Processing Unit (GPU), the Central Processing Unit (CPU), and the Random Access Memory (RAM), and manually from a wattmeter connected to the wall socket. A benchmarking run with default hyper-parameter values for the DNN served as the reference, while the experiments used combinations of different settings. The results were recorded in Excel, and descriptive statistics were used to calculate the group means and compare them in graphs and tables. The outcome was positive when mixed precision was combined with specific hyper-parameters: compared to the benchmark, the optimisation reduced the power consumption of the classification task by between 7 and 11 Watts. The carbon footprint is reduced accordingly, because its calculation uses the same power consumption data. Still, care is required when configuring hyper-parameters, because unsuitable values can negatively affect hardware performance. The research also required inferential statistics, specifically ANOVA and the T-test, to compare the group means; these tests indicated no statistically significant difference between the benchmark and the experiments. However, a more extensive implementation with a cluster of GPUs could increase the sample size significantly, and sample size is an essential factor that can change the outcome of the statistical analysis.
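For illustration, the following is a minimal sketch of how Nvidia mixed-precision training is typically enabled in PyTorch via automatic mixed precision (AMP). The model architecture, layer sizes, and optimiser here are assumptions for the example, not the configuration used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical DNN classifier; sizes are illustrative, not the study's setup.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
).cuda()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

def train_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    optimizer.zero_grad()
    # Eligible ops in the forward pass run in FP16; the rest stay in FP32.
    with torch.cuda.amp.autocast():
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()                # adjusts the scale factor for the next step
    return loss.item()
```

Running FP16 arithmetic on the GPU's tensor cores is what gives mixed precision its potential power savings relative to pure FP32 training.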
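The abstract does not name the collection tools, but GPU board power can be sampled programmatically through NVML. The sketch below is an assumption about how such readings might be gathered, using the `pynvml` bindings and a one-second sampling interval.

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

samples = []
for _ in range(60):  # sample for roughly one minute
    milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # NVML reports mW
    samples.append(milliwatts / 1000.0)                  # convert to Watts
    time.sleep(1.0)

pynvml.nvmlShutdown()
print(f"mean GPU draw: {sum(samples) / len(samples):.1f} W")
```

CPU and RAM draw would need separate instrumentation (e.g. RAPL counters on Intel platforms), which is why the study also cross-checked against a wall-socket wattmeter.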
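The abstract notes that the carbon footprint follows directly from the power data. A common back-of-the-envelope conversion multiplies mean power by training time to obtain energy in kWh, then by a grid emission factor; the values below are illustrative assumptions, not figures from the study.

```python
# Energy (kWh) = mean power (W) * duration (h) / 1000
mean_power_w = 250.0    # assumed mean draw during a training run
duration_h = 2.0        # assumed training time
grid_intensity = 0.233  # kg CO2e per kWh; illustrative emission factor

energy_kwh = mean_power_w * duration_h / 1000.0
co2e_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:.3f} kWh -> {co2e_kg:.3f} kg CO2e")
```

Because the CO2e figure is a linear function of measured power, any per-run saving (such as the 7 to 11 Watts reported here) translates proportionally into a smaller footprint.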
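The ANOVA and T-test comparisons described above can be reproduced in outline with `scipy.stats`. The readings below are synthetic stand-ins generated for the example, since the study's raw measurements are not included here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-run mean power readings (W); purely illustrative,
# not the study's measurements.
benchmark = rng.normal(loc=151.0, scale=1.5, size=10)
mixed_prec = rng.normal(loc=143.0, scale=1.5, size=10)

# Welch's t-test (does not assume equal variances between groups)
t_stat, t_p = stats.ttest_ind(benchmark, mixed_prec, equal_var=False)
# One-way ANOVA across the two groups
f_stat, f_p = stats.f_oneway(benchmark, mixed_prec)

print(f"Welch t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"One-way ANOVA: F={f_stat:.2f}, p={f_p:.4f}")
```

With the small sample sizes the study describes, p-values can easily stay above the usual 0.05 threshold even when group means differ, which is consistent with the author's point that a GPU cluster producing many more runs could change the statistical outcome.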