Reproductive health and ChatGPT: an evaluation of AI-generated responses to commonly asked abortion questions.

IF 1.7 · CAS Tier 3 (Medicine) · JCR Q2 (Family Studies)
Michelle Xu, Pamela Lotke, Melissa Figueroa, Nora Doty, Jonathan Baum
Culture, Health & Sexuality, pp. 1-13. Published online 2025-06-23. DOI: 10.1080/13691058.2025.2517289
Citations: 0

Abstract

Recent assessments of ChatGPT across a variety of pregnancy-related questions have shown mixed results. Rapidly evolving rules and regulations in the USA have created a confusing abortion landscape, making up-to-date, evidence-based abortion information essential for those considering an abortion. The purpose of this study was to evaluate ChatGPT as a source of information on commonly asked medication and procedural abortion questions by performing a qualitative analysis. We queried ChatGPT-3.5 with ten fact-based abortion questions and ten clinical-scenario abortion questions. Responses were graded by three complex family planning physicians as 'acceptable' or 'unacceptable' and as 'complete' or 'incomplete'. The responses were then compared with evidence-based guidance published by the American College of Obstetricians and Gynaecologists (ACOG) and the Society of Family Planning (SFP), PubMed-indexed evidence, and physician clinical experience. In our assessment, 65% of responses were graded acceptable; however, only 8% were graded complete. In general, responses to fact-based questions were more accurate than responses to clinical-scenario questions. Our analysis suggests that ChatGPT can reproduce facts found online, but it still lacks the ability to provide the understanding and context for clinical scenarios that clinicians are better equipped to navigate.
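The grading workflow the abstract describes (three physician graders, binary 'acceptable' and 'complete' labels per response, aggregated into percentages) could be tallied with a short script. The sketch below is purely illustrative: the sample data is hypothetical, and the majority-vote aggregation is an assumption, since the abstract does not state how the three grades per response were combined.

```python
from collections import Counter

def majority_grade(grades):
    """Majority vote across the three physician graders for one axis."""
    return Counter(grades).most_common(1)[0][0]

# Hypothetical grading data: each ChatGPT response rated by three graders
# on two axes, mirroring the study's 'acceptable'/'complete' scheme.
responses = [
    {"acceptability": ["acceptable", "acceptable", "unacceptable"],
     "completeness": ["incomplete", "incomplete", "incomplete"]},
    {"acceptability": ["unacceptable", "unacceptable", "unacceptable"],
     "completeness": ["incomplete", "complete", "incomplete"]},
]

def percent(responses, axis, label):
    """Percentage of responses whose majority grade on `axis` equals `label`."""
    hits = sum(majority_grade(r[axis]) == label for r in responses)
    return 100.0 * hits / len(responses)

print(percent(responses, "acceptability", "acceptable"))  # 50.0
print(percent(responses, "completeness", "complete"))     # 0.0
```

With the full set of twenty graded responses, the same two calls would yield the study's reported 65% acceptable and 8% complete figures.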

Source journal: Culture, Health & Sexuality. CiteScore: 4.60; self-citation rate: 4.50%; annual articles: 80.