ChatGPT is a promising tool to increase readability of orthopedic research consents

Bissem Gill, John Bonamer, Henry A. Kuechly, Rajul Gupta, Scottie Emmert, Sarah Kurkowski, Kim Hasselfeld, Brian M. Grawe

Journal of Orthopaedics, Trauma and Rehabilitation. Published 2024-01-22. DOI: 10.1177/22104917231208212
Abstract
Background/Purpose: Informed consent is a fundamental ethical requirement in medical research, ensuring that participants have a comprehensive understanding of the risks and benefits associated with their participation. Clinical researchers must communicate the implications of participation effectively and efficiently, but the complexity and length of traditional research consent forms can impede comprehension and create barriers to communication between researchers and participants. For this reason, the American Medical Association (AMA) recommends a 6th-grade reading level for all patient-facing medical information. Can the large language model ChatGPT-3.5 improve readability while preserving the information necessary for adequate informed consent?

Methods: Nineteen IRB-approved orthopedic surgery research consent forms were entered into ChatGPT with instructions to make each form "readable at a 6th-grade level." The post-ChatGPT consent forms were assessed using commonly used readability metrics. Additionally, a single orthopedic surgeon who has practiced independently for 15 years assessed the forms for accuracy and retention of imperative informed consent elements.

Results: The median differences between the pre-ChatGPT and post-ChatGPT consent forms were statistically significant for every readability metric (all p < 0.001), and all favored the post-ChatGPT consent as more readable. Two metrics, the Automated Readability Index and the Raygor Grade Level, indicated that the post-ChatGPT consent forms could meet the AMA's recommended 6th-grade reading level. Twelve of the 19 post-ChatGPT consents had at least one error.

Conclusion: ChatGPT can significantly improve the readability of orthopedic clinical research consent forms, but the edited consents are not without mistakes and do not consistently reach the AMA's recommended 6th-grade reading level. Therefore, ChatGPT should be used as a tool to supplement, not replace, the writing and editing process of human researchers.
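To make the readability assessment concrete, the following is a minimal sketch (not the authors' code) of how one of the cited metrics, the Automated Readability Index (ARI), can be computed for a passage of consent-form text. ARI is defined as 4.71 × (characters/words) + 0.5 × (words/sentences) − 21.43, and its output approximates a US grade level; the tokenization rules and example sentences below are illustrative assumptions, not taken from the study.

```python
import re

def automated_readability_index(text: str) -> float:
    """Approximate US grade level of `text` using the ARI formula.

    ARI = 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43
    Simple regex-based tokenization; published tools may differ slightly.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9'\-]+", text)
    if not words or not sentences:
        raise ValueError("Text must contain at least one word and one sentence.")
    characters = sum(len(w) for w in words)  # letters/digits only, no spaces
    return 4.71 * (characters / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

# Hypothetical before/after example: a dense consent sentence vs. a simplified version.
original = ("Participation in this investigation may entail unforeseeable risks, "
            "and compensation for research-related injury is not guaranteed.")
simplified = ("Taking part in this study may have unknown risks. "
              "You may not be paid if you are hurt.")
print(round(automated_readability_index(original), 1))    # higher estimated grade level
print(round(automated_readability_index(simplified), 1))  # lower estimated grade level
```

In the study's workflow, a metric like this would be applied to each consent form before and after ChatGPT editing, and the paired scores compared; a post-ChatGPT score at or below roughly 6 would correspond to the AMA's recommended 6th-grade reading level.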