A comparative analysis of CDC and AI-generated health information using computer-aided text analysis.

Anna Young, Foluke Omosun
Journal of Communication in Healthcare, pp. 1-12. Published 2025-04-14. DOI: 10.1080/17538068.2025.2487378
Background: AI-generated content is easy to access. Members of the public use it as an alternative to, or a supplement for, official sources such as the Centers for Disease Control and Prevention (CDC). However, the quality and reliability of AI-generated health information are questionable. This study aims to understand how AI-generated health information differs from that provided by the CDC, particularly in terms of sentiment, readability, and overall quality. Language expectancy theory serves as a framework and offers insights into how people's expectations of message content from different sources can influence the perceived credibility and persuasiveness of such information.
Methods: Computer-aided text analysis was used to analyze 20 text entries from the CDC and 20 entries generated by ChatGPT 3.5. Content analysis by human coders was used to assess the quality of the information.
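The two measures reported below (emotion-laden word use and reading grade level) can be illustrated with a minimal sketch. The miniature emotion lexicon and the naive syllable counter here are assumptions for illustration only; the study's actual tools are not named in the abstract, and real analyses typically rely on validated lexicons (e.g. LIWC or the NRC Emotion Lexicon) and dictionary-based syllable counts.

```python
import re

# Illustrative miniature emotion lexicon (an assumption; not the
# study's actual dictionary).
EMOTION_WORDS = {
    "anger": {"angry", "furious", "hostile"},
    "sadness": {"sad", "grief", "mourn"},
    "disgust": {"disgust", "revolting", "nausea"},
}

def emotion_counts(text):
    """Count how many tokens fall in each emotion category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {emo: sum(t in words for t in tokens)
            for emo, words in EMOTION_WORDS.items()}

def count_syllables(word):
    # Naive vowel-group heuristic; real readability tools use
    # pronunciation dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Standard Flesch-Kincaid grade-level formula."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(len(words), 1)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
```

A higher Flesch-Kincaid score means a higher U.S. reading grade level is needed, which is how "ChatGPT's responses required a higher reading grade level" would be operationalized.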
Results: ChatGPT used more negative sentiment, particularly words associated with anger, sadness, and disgust. The CDC's health messages were significantly easier to read than those generated by ChatGPT; ChatGPT's responses required a higher reading grade level. In terms of quality, the CDC's information was of slightly higher quality than ChatGPT's, with significant differences in DISCERN scores.
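The quality comparison rests on DISCERN, a published 16-item instrument in which each item is rated 1-5 and the ratings are summed. A minimal sketch of that scoring, plus a Welch's t statistic for comparing the two groups of 20 scores, is below; the abstract does not say which significance test the authors used, so the choice of Welch's t is an assumption.

```python
import math
import statistics as stats

def discern_total(item_ratings):
    """Sum the 16 DISCERN item ratings (each on a 1-5 scale)."""
    assert len(item_ratings) == 16, "DISCERN has 16 items"
    assert all(1 <= r <= 5 for r in item_ratings)
    return sum(item_ratings)

def welch_t(a, b):
    # Welch's t statistic for two independent samples with possibly
    # unequal variances (the study's actual test is an assumption).
    va, vb = stats.variance(a), stats.variance(b)
    na, nb = len(a), len(b)
    return (stats.mean(a) - stats.mean(b)) / math.sqrt(va / na + vb / nb)
```

With 20 CDC totals and 20 ChatGPT totals, a t statistic far from zero would correspond to the "significant differences in DISCERN scores" reported above.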
Conclusion: Public health professionals need to educate the general public about the complexity and quality of AI-generated health information. Health literacy programs should address the quality and readability of AI-generated content. Other recommendations for using AI-generated health information are provided.