Title: When AI sees hotter: Overestimation bias in large language model climate assessments
Authors: Tenzin Tamang, Ruilin Zheng
DOI: 10.1177/09636625251351575 (https://doi.org/10.1177/09636625251351575)
Journal: Public Understanding of Science (Q1, Communication; impact factor 3.5)
Publication date: 2025-07-13
Publication type: Journal Article
Citations: 0
Abstract
Large language models (LLMs) have emerged as a novel form of media, capable of generating human-like text and facilitating interactive communications. However, these systems are subject to concerns regarding inherent biases, as their training on vast text corpora may encode and amplify societal biases. This study investigates overestimation bias in LLM-generated climate assessments, wherein the impacts of climate change are exaggerated relative to expert consensus. Through non-parametric statistical methods, the study compares expert ratings from the Intergovernmental Panel on Climate Change 2023 Synthesis Report with responses from GPT-family LLMs. Results indicate that LLMs systematically overestimate climate change impacts, and that this bias is more pronounced when the models are prompted in the role of a climate scientist. These findings underscore the critical need to align LLM-generated climate assessments with expert consensus to prevent misperception and foster informed public discourse.
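The abstract states that expert ratings and LLM responses were compared with non-parametric statistical methods, without naming the specific test. As an illustration only (not the authors' code or data), a paired comparison of this kind is often done with a Wilcoxon signed-rank test; the ratings below are hypothetical placeholders on an assumed 0-10 impact scale:

```python
# Illustrative sketch, NOT the study's actual method or data: a one-sided
# Wilcoxon signed-rank test checking whether paired LLM ratings tend to
# exceed expert ratings (i.e., overestimation). All values are hypothetical.
from scipy.stats import wilcoxon

expert = [6, 7, 5, 8, 6, 7, 5, 6]  # hypothetical expert-consensus ratings
llm    = [7, 8, 6, 9, 7, 8, 7, 8]  # hypothetical LLM ratings of the same items

# alternative="greater" tests the directional hypothesis that the paired
# differences (llm - expert) are shifted above zero.
stat, p = wilcoxon(llm, expert, alternative="greater")
print(f"W = {stat}, p = {p:.4f}")
```

A paired non-parametric test is a natural fit here because Likert-style ratings are ordinal, so assumptions of a paired t-test (interval-scaled, normally distributed differences) need not hold.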
Journal description:
Public Understanding of Science is a fully peer-reviewed international journal, and the only journal covering all aspects of the inter-relationships between science (including technology and medicine) and the public. Topics covered include:
· surveys of public understanding of and attitudes towards science and technology
· perceptions of science
· popular representations of science
· scientific and para-scientific belief systems
· science in schools