Exploring the limitations in how ChatGPT introduces environmental justice issues in the United States: A case study of 3,108 counties

Impact Factor 7.6 · CAS Zone 2 (Management Science) · JCR Q1 · Information Science & Library Science
Junghwan Kim, Jinhyung Lee, Kee Moon Jang, Ismini Lourentzou
{"title":"Exploring the limitations in how ChatGPT introduces environmental justice issues in the United States: A case study of 3,108 counties","authors":"Junghwan Kim ,&nbsp;Jinhyung Lee ,&nbsp;Kee Moon Jang ,&nbsp;Ismini Lourentzou","doi":"10.1016/j.tele.2023.102085","DOIUrl":null,"url":null,"abstract":"<div><p>The potential of Generative AI, such as ChatGPT, has sparked discussions among researchers and the public. This study empirically explores the capabilities and limitations of ChatGPT, specifically its portrayal of environmental justice issues. Using OpenAI’s ChatGPT API, we asked ChatGPT (GPT-4) to answer questions about environmental justice issues in 3,108 counties in the contiguous United States. Our findings suggest that ChatGPT provides a general overview of environmental justice issues. Consistent with research, ChatGPT appears to acknowledge the disproportionate distribution of environmental pollutants and toxic materials in low-income communities and those inhabited by people of color. However, our results also highlighted ChatGPT’s shortcomings in detailing specific local environmental justice issues, particularly in disadvantaged (e.g., rural and low-income) counties. For instance, ChatGPT could not provide information on local-specific environmental justice issues for 2,593 of 3,108 counties (83%). The results of the binary logistic regression model revealed that counties with lower population densities, higher percentages of white population, and lower incomes are less likely to receive local-specific responses from the ChatGPT. This could indicate a potential regional disparity in the volume and quality of training data, hinting at geographical biases. Our findings offer insights and implications for educators, researchers, and AI developers.</p></div>","PeriodicalId":48257,"journal":{"name":"Telematics and Informatics","volume":"86 ","pages":"Article 102085"},"PeriodicalIF":7.6000,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Telematics and Informatics","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0736585323001491","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

The potential of Generative AI, such as ChatGPT, has sparked discussions among researchers and the public. This study empirically explores the capabilities and limitations of ChatGPT, specifically its portrayal of environmental justice issues. Using OpenAI’s ChatGPT API, we asked ChatGPT (GPT-4) to answer questions about environmental justice issues in 3,108 counties in the contiguous United States. Our findings suggest that ChatGPT provides a general overview of environmental justice issues. Consistent with prior research, ChatGPT appears to acknowledge the disproportionate distribution of environmental pollutants and toxic materials in low-income communities and those inhabited by people of color. However, our results also highlight ChatGPT’s shortcomings in detailing specific local environmental justice issues, particularly in disadvantaged (e.g., rural and low-income) counties. For instance, ChatGPT could not provide information on local-specific environmental justice issues for 2,593 of 3,108 counties (83%). The results of the binary logistic regression model revealed that counties with lower population densities, higher percentages of white population, and lower incomes are less likely to receive local-specific responses from ChatGPT. This could indicate a potential regional disparity in the volume and quality of training data, hinting at geographical biases. Our findings offer insights and implications for educators, researchers, and AI developers.
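The abstract describes two technical steps: querying OpenAI’s ChatGPT API (GPT-4) once per county, and fitting a binary logistic regression on whether each county received a local-specific response. The sketch below illustrates what such a pipeline could look like; it is not the authors’ code. The prompt wording, the is_local_specific() heuristic, the counties.csv file, and the covariate column names (pop_density, pct_white, median_income) are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): query GPT-4 per county, then model
# which county characteristics predict receiving a local-specific response.
import pandas as pd
import statsmodels.api as sm
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_about_county(county: str, state: str) -> str:
    """Ask GPT-4 about environmental justice issues in one county."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"What are the environmental justice issues in {county}, {state}?",
        }],
    )
    return response.choices[0].message.content


def is_local_specific(answer: str, county: str) -> bool:
    """Crude placeholder check for a local-specific answer.

    The paper codes responses as local-specific or generic; the exact coding
    scheme is not reproduced here.
    """
    text = answer.lower()
    generic_markers = ("as an ai", "i don't have specific information")
    return county.lower() in text and not any(m in text for m in generic_markers)


# counties.csv is assumed to hold one row per county with the covariates
# named in the abstract (population density, % white population, income).
counties = pd.read_csv("counties.csv")
counties["local_specific"] = [
    int(is_local_specific(ask_about_county(row.county, row.state), row.county))
    for row in counties.itertuples()
]

# Binary logistic regression: do population density, racial composition, and
# income predict getting a local-specific response?
X = sm.add_constant(counties[["pop_density", "pct_white", "median_income"]])
result = sm.Logit(counties["local_specific"], X).fit()
print(result.summary())
```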


Source journal: Telematics and Informatics (Information Science & Library Science)
CiteScore: 17.00
Self-citation rate: 4.70%
Articles published per year: 104
Review time: 24 days
Journal description: Telematics and Informatics is an interdisciplinary journal that publishes cutting-edge theoretical and methodological research exploring the social, economic, geographic, political, and cultural impacts of digital technologies. It covers various application areas, such as smart cities, sensors, information fusion, digital society, IoT, cyber-physical technologies, privacy, knowledge management, distributed work, emergency response, mobile communications, health informatics, social media's psychosocial effects, ICT for sustainable development, blockchain, e-commerce, and e-government.