Automating untruths: ChatGPT, self-managed medication abortion, and the threat of misinformation in a post-Roe world

Impact Factor 3.2 · Q1, Health Care Sciences & Services
Hayley V. McMahon, Bryan D. McMahon
{"title":"Automating untruths: ChatGPT, self-managed medication abortion, and the threat of misinformation in a post-Roe world","authors":"Hayley V. McMahon, Bryan D. McMahon","doi":"10.3389/fdgth.2024.1287186","DOIUrl":null,"url":null,"abstract":"ChatGPT is a generative artificial intelligence chatbot that uses natural language processing to understand and execute prompts in a human-like manner. While the chatbot has become popular as a source of information among the public, experts have expressed concerns about the number of false and misleading statements made by ChatGPT. Many people search online for information about self-managed medication abortion, which has become even more common following the overturning of Roe v. Wade. It is likely that ChatGPT is also being used as a source of this information; however, little is known about its accuracy.To assess the accuracy of ChatGPT responses to common questions regarding self-managed abortion safety and the process of using abortion pills.We prompted ChatGPT with 65 questions about self-managed medication abortion, which produced approximately 11,000 words of text. We qualitatively coded all data in MAXQDA and performed thematic analysis.ChatGPT responses correctly described clinician-managed medication abortion as both safe and effective. In contrast, self-managed medication abortion was inaccurately described as dangerous and associated with an increase in the risk of complications, which was attributed to the lack of clinician supervision.ChatGPT repeatedly provided responses that overstated the risk of complications associated with self-managed medication abortion in ways that directly contradict the expansive body of evidence demonstrating that self-managed medication abortion is both safe and effective. The chatbot's tendency to perpetuate health misinformation and associated stigma regarding self-managed medication abortions poses a threat to public health and reproductive autonomy.","PeriodicalId":73078,"journal":{"name":"Frontiers in digital health","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in digital health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fdgth.2024.1287186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background: ChatGPT is a generative artificial intelligence chatbot that uses natural language processing to understand and execute prompts in a human-like manner. While the chatbot has become popular as a source of information among the public, experts have expressed concerns about the number of false and misleading statements made by ChatGPT. Many people search online for information about self-managed medication abortion, which has become even more common following the overturning of Roe v. Wade. It is likely that ChatGPT is also being used as a source of this information; however, little is known about its accuracy.

Objective: To assess the accuracy of ChatGPT responses to common questions regarding self-managed abortion safety and the process of using abortion pills.

Methods: We prompted ChatGPT with 65 questions about self-managed medication abortion, which produced approximately 11,000 words of text. We qualitatively coded all data in MAXQDA and performed thematic analysis.

Results: ChatGPT responses correctly described clinician-managed medication abortion as both safe and effective. In contrast, self-managed medication abortion was inaccurately described as dangerous and associated with an increase in the risk of complications, which was attributed to the lack of clinician supervision.

Conclusions: ChatGPT repeatedly provided responses that overstated the risk of complications associated with self-managed medication abortion in ways that directly contradict the expansive body of evidence demonstrating that self-managed medication abortion is both safe and effective. The chatbot's tendency to perpetuate health misinformation and associated stigma regarding self-managed medication abortions poses a threat to public health and reproductive autonomy.
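The data-collection step described in the Methods (submitting a fixed set of 65 questions to ChatGPT and exporting the responses for qualitative coding in MAXQDA) could in principle be reproduced programmatically. The sketch below is a minimal, hypothetical illustration using the OpenAI Python client, not the authors' actual procedure: the study queried the consumer ChatGPT interface, and the model name, question file, and output path here are assumptions for illustration only.

```python
# Minimal sketch (not the authors' actual procedure): send a fixed list of
# questions to an OpenAI chat model and save the responses as plain text for
# later qualitative coding (e.g., import into MAXQDA).
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set, and
# questions.txt holds one question per line (file name and model are hypothetical).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("questions.txt", encoding="utf-8") as f:
    questions = [line.strip() for line in f if line.strip()]

with open("chatgpt_responses.txt", "w", encoding="utf-8") as out:
    for i, question in enumerate(questions, start=1):
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # hypothetical model choice
            messages=[{"role": "user", "content": question}],
        )
        answer = completion.choices[0].message.content
        out.write(f"Q{i}: {question}\n{answer}\n\n")
```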
Source journal: Frontiers in Digital Health
CiteScore: 4.20
Self-citation rate: 0.00%
Review time: 13 weeks