A comparative analysis of CDC and AI-generated health information using computer-aided text analysis.

JCR quartile: Q2 (Social Sciences)
Anna Young, Foluke Omosun
DOI: 10.1080/17538068.2025.2487378
Journal of Communication in Healthcare, pp. 1-12. Published 2025-04-14 (journal article).
Citations: 0

Abstract


Background: AI-generated content is easy to access. Members of the public use it as an alternative to, or a supplement for, official sources such as the Centers for Disease Control and Prevention (CDC). However, the quality and reliability of AI-generated health information are questionable. This study aims to understand how AI-generated health information differs from that provided by the CDC, particularly in sentiment, readability, and overall quality. Language expectancy theory serves as the framework, offering insight into how people's expectations of message content from different sources influence the perceived credibility and persuasiveness of that information.

Methods: Computer-aided text analysis was used to analyze 20 text entries from the CDC and 20 entries generated by ChatGPT 3.5. Content analysis utilizing human coders was used to assess the quality of information.
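The abstract does not name the specific analysis tool; lexicon-based approaches such as LIWC or the NRC Emotion Lexicon are commonly used for this kind of sentiment coding. As a minimal sketch of how lexicon-based emotion counting works (the tiny lexicon below is invented for illustration and is not from the study):

```python
import re
from collections import Counter

# Toy emotion lexicon for demonstration only. Real tools such as LIWC or
# the NRC Emotion Lexicon map thousands of words to emotion categories.
EMOTION_LEXICON = {
    "outbreak": "fear", "risk": "fear", "severe": "sadness",
    "death": "sadness", "contaminated": "disgust", "failure": "anger",
}

def emotion_counts(text: str) -> Counter:
    """Count lexicon hits per emotion category in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
```

Comparing such per-category counts across the two corpora (CDC vs. ChatGPT entries) is one way the reported differences in anger, sadness, and disgust vocabulary could be quantified.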

Results: ChatGPT expressed more negative sentiment, particularly words associated with anger, sadness, and disgust. The CDC's health messages were significantly easier to read than those generated by ChatGPT, whose responses required a higher reading grade level. In terms of quality, the CDC's information scored slightly higher than ChatGPT's, with significant differences in DISCERN scores.
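The abstract reports a "reading grade level" without naming the metric; the Flesch-Kincaid grade level is a standard choice for such comparisons. A minimal pure-Python sketch, assuming that formula (the syllable heuristic is approximate, and this is not necessarily the exact tool the authors used):

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels, subtracting a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

Longer sentences and more polysyllabic words drive the score up, which is consistent with the finding that ChatGPT's denser responses demanded a higher grade level than the CDC's plainer messages.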

Conclusion: Public health professionals need to educate the general public about the complexity and quality of AI-generated health information. Health literacy programs should address the quality and readability of AI-generated content. Further recommendations for using AI-generated health information are provided.

Source journal: Journal of Communication in Healthcare (Social Sciences, Communication)
CiteScore: 2.90; self-citation rate: 0.00%; articles per year: 44