Towards carbon-aware AI: a systematic PRISMA review and taxonomy of green architectures, hardware life-cycle, and energy-efficient algorithms
Raghavendra M. Devadas, Sowmya T
Energy Informatics, vol. 9, no. 1, published 2026-03-25. DOI: 10.1186/s42162-026-00651-8
Article: https://link.springer.com/article/10.1186/s42162-026-00651-8
PDF: https://link.springer.com/content/pdf/10.1186/s42162-026-00651-8.pdf
Abstract
The growing computational demands of modern Artificial Intelligence (AI) development have raised concerns about the environmental sustainability of machine learning. The carbon footprint of AI systems has received relatively little attention in previous research, and the work that does exist is fragmented across algorithms, hardware, life cycle, and deployment. This work surveys the state of the art to provide a comprehensive review of carbon-aware AI across the full computing stack. Following PRISMA guidelines, a systematic search of prominent academic databases and repositories (2018–2025) identified 784 unique records, of which 62 studies satisfied pre-established inclusion criteria. These studies were organized into four domains: energy-efficient algorithms (20), hardware and accelerators (15), life-cycle assessment (LCA) (9), and deployment-time operation (11). Three major conclusions follow from the synthesis. First, algorithmic efficiency techniques such as pruning, quantization, and sparsity can reduce computational burden, but they deliver concrete carbon reductions only when accounted for alongside hardware and data-centre configurations. Second, life-cycle analyses show that while operational energy still accounts for most emissions during large-scale training, embodied carbon from semiconductor fabrication plays an increasingly important role in large or frequently refreshed hardware fleets. Third, deployment decisions such as data-centre location, carbon-aware scheduling, and cloud–edge workload placement introduce far more variance in real emissions than model-level optimisation alone can address. Across sectors, methodological inconsistencies, notably in carbon reporting, system boundaries, and energy telemetry, hinder reproducibility and cross-study comparison of findings.
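The operational-versus-embodied split described above can be illustrated with a back-of-the-envelope estimate. All figures below (device power, grid carbon intensity, embodied carbon per device, hardware lifetime) are illustrative assumptions for the sketch, not values taken from the reviewed studies:

```python
# Illustrative carbon estimate for a single training job.
# All numbers are hypothetical assumptions, not data from the review.

def operational_kgco2(energy_kwh: float, grid_kgco2_per_kwh: float) -> float:
    """Operational emissions: energy drawn times grid carbon intensity."""
    return energy_kwh * grid_kgco2_per_kwh

def amortized_embodied_kgco2(embodied_kgco2: float,
                             lifetime_hours: float,
                             job_hours: float) -> float:
    """Embodied (manufacturing) emissions amortized over device lifetime."""
    return embodied_kgco2 * (job_hours / lifetime_hours)

# Assumed scenario: 8 accelerators at 0.4 kW each, a 72-hour job,
# grid at 0.4 kgCO2/kWh, 150 kgCO2 embodied per device, 4-year lifetime.
job_hours = 72
energy_kwh = 8 * 0.4 * job_hours                      # 230.4 kWh
op = operational_kgco2(energy_kwh, 0.4)               # ~92.2 kgCO2
emb = 8 * amortized_embodied_kgco2(150, 4 * 365 * 24, job_hours)

print(f"operational: {op:.1f} kgCO2, embodied: {emb:.1f} kgCO2")
```

For one job, operational emissions dominate, consistent with the review's first life-cycle point; over a fleet with frequent hardware refreshes, the amortized embodied term grows in relative importance.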
To overcome these limitations, this review proposes a research agenda centred on standardized carbon accounting, hardware–software co-optimization, more widely available embodied-emission data, and the inclusion of carbon as a criterion in AutoML and scheduling systems. This integration is intended as a stepping stone towards sustainable AI across academia and industry.
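The carbon-aware scheduling idea mentioned in the abstract can be sketched minimally as choosing the start time that minimizes forecast grid carbon intensity over a job window. The forecast values and window length below are hypothetical, not drawn from any real grid API:

```python
# Minimal carbon-aware scheduling sketch: pick the start hour whose
# job window has the lowest mean forecast grid carbon intensity.
# Forecast values are hypothetical, not from a real grid data source.

def best_start_hour(forecast_kgco2_per_kwh: list[float], job_hours: int) -> int:
    """Return the start index with the lowest mean intensity over the window."""
    n = len(forecast_kgco2_per_kwh)
    best_start, best_mean = 0, float("inf")
    for start in range(n - job_hours + 1):
        window = forecast_kgco2_per_kwh[start:start + job_hours]
        mean = sum(window) / job_hours
        if mean < best_mean:
            best_start, best_mean = start, mean
    return best_start

# Hypothetical 12-hour intensity forecast (kgCO2/kWh) with a midday solar dip.
forecast = [0.45, 0.44, 0.40, 0.32, 0.25, 0.21,
            0.20, 0.24, 0.33, 0.41, 0.46, 0.48]
print(best_start_hour(forecast, 3))  # → 5 (hours 5-7 average ≈ 0.217)
```

Real schedulers would layer deadlines, migration costs, and forecast uncertainty on top of this, but the core decision, shifting flexible workloads into low-intensity windows, is what gives deployment-level choices their large leverage over emissions.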