Eamon Shamil,Tsz Ki Ko,Ka Siu Fan,James Schuster-Bruce,Mustafa Jaafar,Sadie Khwaja,Nicholas Eynon-Lewis,Alwyn Ray D'Souza,Peter Andrews
{"title":"评估在线患者信息的质量和可读性:英国耳鼻喉科患者信息电子传单与人工智能生成器的响应对比。","authors":"Eamon Shamil,Tsz Ki Ko,Ka Siu Fan,James Schuster-Bruce,Mustafa Jaafar,Sadie Khwaja,Nicholas Eynon-Lewis,Alwyn Ray D'Souza,Peter Andrews","doi":"10.1055/a-2413-3675","DOIUrl":null,"url":null,"abstract":"BACKGROUND\r\nThe evolution of artificial intelligence has introduced new ways to disseminate health information, including natural language processing models like ChatGPT. However, the quality and readability of such digitally-generated information remains understudied. This study is the first to compare the quality and readability of digitally-generated health information against leaflets produced by professionals.\r\n\r\nMETHODOLOGY\r\nPatient information leaflets for five ENT UK leaflets and their corresponding ChatGPT responses were extracted from the Internet. Assessors with various degree of medical knowledge evaluated the content using the Ensuring Quality Information for Patients (EQIP) tool and readability tools including the Flesch-Kincaid Grade Level (FKGL). Statistical analysis was performed to identify differences between leaflets, assessors, and sources of information.\r\n\r\nRESULTS\r\nENT UK leaflets were of moderate quality, scoring a median EQIP of 23. Statistically significant differences in overall EQIP score were identified between ENT UK leaflets but ChatGPT responses were of uniform quality. Non-specialist doctors rated the highest EQIP scores while medical students scored the lowest. The mean readability of ENT UK leaflets was higher than ChatGPT responses. The information metrics of ENT UK leaflets were moderate and varied between topics. Equivalent ChatGPT information provided comparable content quality, but with reduced readability.\r\n\r\nCONCLUSIONS\r\nChatGPT patient information and professionally-produced leaflets had comparable content, but LLM content were required a higher reading age. With the increasing use of online health resources, this study highlights the need for a balanced approach that considers optimises both the quality and readability of patient education materials.","PeriodicalId":12195,"journal":{"name":"Facial Plastic Surgery","volume":"60 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing the quality and readability of online patient information: ENT UK patient information e-leaflets vs responses by a Generative Artificial Intelligence.\",\"authors\":\"Eamon Shamil,Tsz Ki Ko,Ka Siu Fan,James Schuster-Bruce,Mustafa Jaafar,Sadie Khwaja,Nicholas Eynon-Lewis,Alwyn Ray D'Souza,Peter Andrews\",\"doi\":\"10.1055/a-2413-3675\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"BACKGROUND\\r\\nThe evolution of artificial intelligence has introduced new ways to disseminate health information, including natural language processing models like ChatGPT. However, the quality and readability of such digitally-generated information remains understudied. This study is the first to compare the quality and readability of digitally-generated health information against leaflets produced by professionals.\\r\\n\\r\\nMETHODOLOGY\\r\\nPatient information leaflets for five ENT UK leaflets and their corresponding ChatGPT responses were extracted from the Internet. 
Assessors with various degree of medical knowledge evaluated the content using the Ensuring Quality Information for Patients (EQIP) tool and readability tools including the Flesch-Kincaid Grade Level (FKGL). Statistical analysis was performed to identify differences between leaflets, assessors, and sources of information.\\r\\n\\r\\nRESULTS\\r\\nENT UK leaflets were of moderate quality, scoring a median EQIP of 23. Statistically significant differences in overall EQIP score were identified between ENT UK leaflets but ChatGPT responses were of uniform quality. Non-specialist doctors rated the highest EQIP scores while medical students scored the lowest. The mean readability of ENT UK leaflets was higher than ChatGPT responses. The information metrics of ENT UK leaflets were moderate and varied between topics. Equivalent ChatGPT information provided comparable content quality, but with reduced readability.\\r\\n\\r\\nCONCLUSIONS\\r\\nChatGPT patient information and professionally-produced leaflets had comparable content, but LLM content were required a higher reading age. With the increasing use of online health resources, this study highlights the need for a balanced approach that considers optimises both the quality and readability of patient education materials.\",\"PeriodicalId\":12195,\"journal\":{\"name\":\"Facial Plastic Surgery\",\"volume\":\"60 1\",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Facial Plastic Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1055/a-2413-3675\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Facial Plastic Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1055/a-2413-3675","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Assessing the quality and readability of online patient information: ENT UK patient information e-leaflets vs responses by a Generative Artificial Intelligence.
BACKGROUND
The evolution of artificial intelligence has introduced new ways to disseminate health information, including natural language processing models like ChatGPT. However, the quality and readability of such digitally generated information remain understudied. This study is the first to compare the quality and readability of digitally generated health information against leaflets produced by professionals.
METHODOLOGY
Five ENT UK patient information leaflets and their corresponding ChatGPT responses were extracted from the Internet. Assessors with varying degrees of medical knowledge evaluated the content using the Ensuring Quality Information for Patients (EQIP) tool and readability measures including the Flesch-Kincaid Grade Level (FKGL). Statistical analysis was performed to identify differences between leaflets, assessors, and sources of information.
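For illustration, the FKGL metric cited above follows the standard published formula, 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. The sketch below is a minimal Python approximation of that calculation, not the software used in the study; in particular, the vowel-group syllable counter is a simplifying assumption.

import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (an approximation, not a dictionary lookup)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Example: a short patient-facing sentence scores at a low grade level
print(round(flesch_kincaid_grade("Your nose may feel blocked for a few days after surgery."), 1))

Higher FKGL scores correspond to text requiring a higher reading age, which is how the leaflets and ChatGPT responses were compared in this study.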
RESULTS
ENT UK leaflets were of moderate quality, with a median EQIP score of 23. Statistically significant differences in overall EQIP score were identified among the ENT UK leaflets, whereas ChatGPT responses were of uniform quality. Non-specialist doctors gave the highest EQIP scores, while medical students gave the lowest. The mean readability of ENT UK leaflets was higher than that of ChatGPT responses. The information metrics of ENT UK leaflets were moderate and varied between topics. Equivalent ChatGPT information provided comparable content quality, but with reduced readability.
CONCLUSIONS
ChatGPT patient information and professionally produced leaflets had comparable content, but the LLM-generated content required a higher reading age. With the increasing use of online health resources, this study highlights the need for a balanced approach that optimises both the quality and readability of patient education materials.
Journal Introduction:
Facial Plastic Surgery is a journal that publishes topic-specific issues covering areas of aesthetic and reconstructive plastic surgery as it relates to the head, neck, and face. The journal's scope includes issues devoted to scar revision, periorbital and mid-face rejuvenation, facial trauma, facial implants, rhinoplasty, neck reconstruction, cleft palate, face lifts, as well as various other emerging minimally invasive procedures.
Authors provide a global perspective on each topic, critically evaluate recent works in the field, and apply it to clinical practice.