Risk communication and large language models

Impact Factor 1.9 · JCR Quartile Q3 (Public Administration)
Daniel Sledge, Herschel F. Thomas
{"title":"Risk communication and large language models","authors":"Daniel Sledge, Herschel F. Thomas","doi":"10.1002/rhc3.12303","DOIUrl":null,"url":null,"abstract":"The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM‐based chat programs for risk communication. We examine ChatGPT‐generated responses to 24 different hazard situations. We compare these responses to guidelines published for public consumption on the US Department of Homeland Security's <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"http://Ready.gov\">Ready.gov</jats:ext-link> website. We find that, although ChatGPT did not generate false or misleading responses, ChatGPT responses were typically less than optimal in terms of their similarity to guidances from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that were substantially different than those from <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"http://Ready.gov\">Ready.gov</jats:ext-link>. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges represented by a potential shift in information flows away from public officials and experts and towards individuals.","PeriodicalId":21362,"journal":{"name":"Risk, Hazards & Crisis in Public Policy","volume":"4 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Risk, Hazards & Crisis in Public Policy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/rhc3.12303","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PUBLIC ADMINISTRATION","Score":null,"Total":0}
引用次数: 0

Abstract

The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM‐based chat programs for risk communication. We examine ChatGPT‐generated responses to 24 different hazard situations. We compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, ChatGPT responses were typically less than optimal in terms of their similarity to guidance from the federal government. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that were substantially different from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges represented by a potential shift in information flows away from public officials and experts and towards individuals.
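The abstract describes comparing ChatGPT-generated hazard responses to Ready.gov guidance "in terms of their similarity," but does not specify how that comparison was operationalized. As a purely illustrative sketch, and not the authors' method, one simple way to quantify lexical overlap between an LLM response and official guidance is TF-IDF cosine similarity; the function name and the example texts below are hypothetical and are not drawn from the study's data.

```python
# Hypothetical sketch: score how closely an LLM-generated hazard response
# tracks official Ready.gov-style guidance using TF-IDF cosine similarity.
# This is one possible operationalization, not the procedure used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def guidance_similarity(llm_response: str, official_guidance: str) -> float:
    """Return cosine similarity between two guidance texts.

    0.0 means no shared vocabulary after stop-word removal;
    1.0 means identical term-weight profiles.
    """
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([llm_response, official_guidance])
    return float(cosine_similarity(tfidf[0:1], tfidf[1:2])[0, 0])


# Placeholder texts for illustration only (not actual ChatGPT output
# or Ready.gov content).
llm_text = (
    "Go to a basement or an interior room on the lowest floor "
    "and protect your head and neck."
)
ready_gov_text = (
    "Go to a safe room, basement, or storm cellar. "
    "Stay away from windows, doors, and outside walls."
)
print(f"Similarity: {guidance_similarity(llm_text, ready_gov_text):.2f}")
```

A lexical measure like this only captures shared wording; it would not, on its own, detect the omissions or shifted points of emphasis the authors highlight, which call for content-level coding of the responses.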
Source journal: Risk, Hazards & Crisis in Public Policy
CiteScore: 7.50 · Self-citation rate: 8.60% · Articles published: 20
Journal description: Scholarship on risk, hazards, and crises (emergencies, disasters, or public policy/organizational crises) has developed into mature and distinct fields of inquiry. Risk, Hazards & Crisis in Public Policy (RHCPP) addresses the governance implications of the important questions raised for the respective fields. The relationships between risk, hazards, and crisis raise fundamental questions with broad social science and policy implications. During unstable situations of acute or chronic danger and substantial uncertainty (i.e., a crisis), important and deeply rooted societal institutions, norms, and values come into play. The purpose of RHCPP is to provide a forum for research and commentary that examines societies' understanding of and measures to address risk, hazards, and crises, how public policies do and should address these concerns, and to what effect. The journal is explicitly designed to encourage a broad range of perspectives by integrating work from a variety of disciplines. The journal will look at social science theory and policy design across the spectrum of risks and crises, including natural and technological hazards, public health crises, terrorism, and societal and environmental disasters. Papers will analyze the ways societies deal with both unpredictable and predictable events as public policy questions, which include topics such as crisis governance, loss and liability, emergency response, agenda setting, and the social and cultural contexts in which hazards, risks, and crises are perceived and defined. Risk, Hazards & Crisis in Public Policy invites dialogue and is open to new approaches. We seek scholarly work that combines academic quality with practical relevance. We especially welcome authors writing on the governance of risk and crises to submit their manuscripts.