Sustainability of large language models—user perspective

Pavel Pipek, Shane Canavan, Susan Canavan, César Capinha, Jérôme MW Gippet, Ana Novoa, Petr Pyšek, Allan T Souza, Shengyu Wang, Ivan Jarić

Frontiers in Ecology and the Environment 23(5), 2 June 2025. DOI: 10.1002/fee.2856
Abstract
Large language models (LLMs) are becoming an integral part of our daily work. In the field of ecology, LLMs are already being applied to a wide range of tasks, such as extracting georeferenced data or taxonomic entities from unstructured texts, information synthesis, coding, and teaching (Methods Ecol Evol 2024; Npj Biodivers 2024). Further development and increased use of LLMs in ecology, as in science in general, is likely to intensify and accelerate the process of research and increase publication output—thus pressuring scientists to keep up with the elevated pace, which in turn creates a feedback loop by promoting even greater LLM use.
However, this all comes at a cost. Although not directly borne by end users, aside from occasional response delays, LLMs require considerable computational power and are energy-demanding during both their initial training phase and their subsequent operational use (Nature 2025). Additional, partly externalized energy costs arise from the intensive web searching and processing of retrieved sources carried out by "Deep Research" features. Currently, the total energy costs of LLMs remain difficult to estimate, largely owing to limited transparency from the companies that develop them.
Ways to improve LLM sustainability, such as algorithmic or hardware optimization and the use of renewable energy during development and operation, have been extensively examined. However, we contend that the role of end users, including researchers, has been largely overlooked. End users can and should be part of the solution, to their own benefit. By selecting less resource-intensive options, optimizing their prompts, or choosing platforms that use renewable energy sources, users would not only contribute to LLM sustainability but also improve their own workflows. Besides reducing energy consumption, more parsimonious use of LLMs could also lessen other harms, such as cooling water use and extraction of rare earth metals. Consequently, companies should support users in making such informed choices.
For instance, most companies provide LLMs of different complexities or sizes, often measured by the number of parameters. Relying on the largest models can be excessive in many cases (eg answering emails, checking grammar, or conducting searches that could be done by traditional search engines). By selecting a smaller, less energy-intensive model, users can also benefit from quicker responses. In addition, some smaller models are trained to perform specific tasks, eg coding, and can thus match or outperform bigger ones.
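The routing idea above can be sketched in a few lines. This is a minimal, hypothetical example: the model names and task categories are placeholders, not any provider's actual lineup, and real routing logic would be more nuanced.

```python
# Illustrative sketch: route routine tasks to a smaller, less
# energy-intensive model and reserve the largest model for hard tasks.
# Model names and task categories are hypothetical placeholders.

SIMPLE_TASKS = {"email", "grammar", "lookup"}

def choose_model(task_type: str) -> str:
    """Return a smaller model for routine tasks, a larger one otherwise."""
    if task_type in SIMPLE_TASKS:
        return "small-efficient-model"   # lower energy use, faster replies
    return "large-general-model"         # reserve for genuinely hard tasks

print(choose_model("grammar"))  # -> small-efficient-model
```

Even a crude rule of this kind makes the default choice the frugal one, which is the point of the recommendation.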
Another way to reduce energy costs and save user time is to trim the expected length and complexity of the model's response. For some questions, an elaborate answer is unnecessary, if not counterproductive (ie because it takes time to read through); in other cases, only code is needed, without further explanation. Such preferences can be specified directly in the prompt.
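Specifying such constraints can be as simple as appending them to the request. A minimal sketch, with illustrative wording for the constraint sentences:

```python
# Sketch: append explicit brevity constraints to a prompt so the model
# generates, and the user reads, fewer tokens. The constraint wording
# below is illustrative, not a tested recipe.

def constrain_response(prompt, code_only=False, max_words=None):
    """Return the prompt with optional brevity constraints appended."""
    parts = [prompt.strip()]
    if code_only:
        parts.append("Return only the code, with no explanation.")
    if max_words is not None:
        parts.append(f"Answer in at most {max_words} words.")
    return " ".join(parts)

print(constrain_response(
    "Write an R function that computes Shannon diversity.",
    code_only=True,
))
```

Shorter responses cost less energy to generate and less time to read, so the two benefits align.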
In both everyday conversation with people and interactions with LLMs, an answer is most likely to meet expectations if the request is well formulated. It is therefore important to craft prompts carefully, to avoid rerunning models multiple times. While a comprehensive overview of optimal prompting lies beyond the scope of this editorial, we suggest a few simple approaches below.
Often it is advisable to leverage user expertise and provide the model with explicit guidance. For example, for coding, a prompt could include the names of the tools (eg libraries or packages for a specific programming language) that should be used, or even a basic description of the algorithm, including any known pitfalls, to avoid repeated prompts and regeneration of the whole code. Further, if a requested task requires up-to-date information, or if the tools the model learned during training are already outdated, the model can be asked to conduct a web search.
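A prompt that front-loads this expertise can be assembled from a small template. The sketch below is hypothetical; the field names and the example task, libraries, and pitfall are illustrative choices, not recommendations.

```python
# Sketch of a coding-prompt template that front-loads user expertise:
# target language, preferred libraries, and known pitfalls, so the
# model is less likely to need repeated regeneration of the code.

def build_coding_prompt(task, language, libraries=(), pitfalls=()):
    """Assemble a structured coding prompt from the user's own knowledge."""
    lines = [f"Task: {task}", f"Language: {language}"]
    if libraries:
        lines.append("Use these libraries: " + ", ".join(libraries))
    if pitfalls:
        lines.append("Known pitfalls to avoid: " + "; ".join(pitfalls))
    return "\n".join(lines)

prompt = build_coding_prompt(
    "Extract species names and coordinates from field notes",
    "R",
    libraries=("taxize", "sf"),
    pitfalls=("coordinates may use a comma as the decimal separator",),
)
print(prompt)
```

Each line the user adds up front is information the model would otherwise have to guess, and guessing wrong is what triggers costly reruns.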
Whenever a prompt fits its purpose, it should be saved and, if possible, shared. Indeed, several repositories of prompts already exist, some of which include prompts that can be used in research. However, repositories focused on prompts tailored to research in natural sciences are still missing. There are also several prompt marketplaces, in which people offer prompts already optimized for specific tasks, thus reducing the need for repeated trial and error and helping to use less energy and time in the process.
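Saving and sharing tested prompts needs no special infrastructure; even a plain JSON file keyed by task name would serve. A minimal sketch, with an assumed file name and schema:

```python
# Minimal sketch of a local prompt repository: tested prompts stored in
# a JSON file keyed by task name, so they can be reused and shared.
# The file name and schema here are illustrative assumptions.
import json
from pathlib import Path

REPO = Path("prompt_repository.json")

def save_prompt(name, prompt, repo=REPO):
    """Add or update a named prompt in the repository file."""
    entries = json.loads(repo.read_text()) if repo.exists() else {}
    entries[name] = prompt
    repo.write_text(json.dumps(entries, indent=2, ensure_ascii=False))

def load_prompt(name, repo=REPO):
    """Retrieve a previously saved prompt by name."""
    return json.loads(repo.read_text())[name]

save_prompt("species-extraction",
            "Extract all binomial species names from the text below.")
print(load_prompt("species-extraction"))
```

A shared file of this kind, kept under version control by a lab group, would already capture much of the benefit of the repositories described above.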
Likewise, a multitude of browser extensions (plug-ins) have been developed to help with prompt engineering. However, such tools should be built directly into the models’ web interfaces, so that more sustainable prompts can more easily be used by a larger number of users. The companies that provide LLMs can indeed play a pivotal role in promoting sustainable LLM use, by giving user-friendly guidance on the most efficient use of their products. In addition, companies should be more transparent about their models’ energy costs (and the energy sources they use), guide interested users toward less demanding options, or let them choose to run only models powered by renewable energy sources.
While we believe that companies should not shift responsibility for LLM sustainability onto end users, users still have an important role to play: by ensuring their use of LLMs is rational and effective, and by pressuring companies to give them the tools to do so.
Journal Introduction
Frontiers in Ecology and the Environment is a publication by the Ecological Society of America that focuses on the significance of ecology and environmental science in various aspects of research and problem-solving. The journal covers topics such as biodiversity conservation, ecosystem preservation, natural resource management, public policy, and other related areas.
The publication features a range of content, including peer-reviewed articles, editorials, commentaries, letters, and occasional special issues and topical series. It releases ten issues per year, excluding January and July. ESA members receive both print and electronic copies of the journal, while institutional subscriptions are also available.
Frontiers in Ecology and the Environment is highly regarded in the field, as indicated by its ranking in the 2021 Journal Citation Reports by Clarivate Analytics. The journal is ranked 4th out of 174 in ecology journals and 11th out of 279 in environmental sciences journals. Its impact factor for 2021 is reported as 13.789, which further demonstrates its influence and importance in the scientific community.