{"title":"Exploring HTML5 Package Interactive Content in Supporting Learning Through Self-Paced Massive Open Online Courses on Healthy Aging: Mixed Methods Study.","authors":"Pratiwi Rahadiani, Aria Kekalih, Diantha Soemantri, Desak Gede Budi Krisnamurti","doi":"10.2196/45468","DOIUrl":"10.2196/45468","url":null,"abstract":"<p><strong>Background: </strong>The rapidly aging population and the growth of geriatric medicine in the field of internal medicine are not supported by sufficient gerontological training in many health care disciplines. There is rising awareness about the education and training needed to adequately prepare health care professionals to address the needs of the older adult population. Massive open online courses (MOOCs) might be the best alternative method of learning delivery in this context. However, the diversity of MOOC participants poses a challenge for MOOC providers to innovate in developing learning content that suits the needs and characteristics of participants.</p><p><strong>Objective: </strong>The primary aim of this study was to explore students' perceptions and acceptance of HTML5 package (H5P) interactive content in self-paced MOOCs and its association with students' characteristics and experience in using MOOCs.</p><p><strong>Methods: </strong>This study used a cross-sectional design, combining qualitative and quantitative approaches. Participants, predominantly general practitioners from various regions of Indonesia with diverse educational backgrounds and age groups, completed pretests, engaged with H5P interactive content, and participated in forum discussions and posttests. Data were retrieved from the online questionnaire attached to a selected MOOC course. Students' perceptions and acceptance of H5P interactive content were rated on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). 
Data were analyzed using SPSS (IBM Corp) to examine demographics, computer literacy, acceptance, and perceptions of H5P interactive content. Quantitative analysis explored correlations, while qualitative analysis identified recurring themes from open-ended survey responses to determine students' perceptions.</p><p><strong>Results: </strong>In total, 184 MOOC participants agreed to participate in the study. Students demonstrated positive perceptions and a high level of acceptance of integrating H5P interactive content within the self-paced MOOC. Analysis of mean (SD) values across all responses consistently revealed favorable scores (greater than 5), ranging from 5.18 (SD 0.861) to 5.45 (SD 0.659) for perceptions and from 5.28 (SD 0.728) to 5.52 (SD 0.627) for acceptance. This finding underscores widespread satisfaction and robust acceptance of H5P interactive content. Students found the H5P interactive content more satisfying and fun, easier to understand, more effective, and more helpful in improving learning outcomes than material in the form of common documents and learning videos. There was a significant correlation between computer literacy, students' acceptance, and students' perceptions.</p><p><strong>Conclusions: </strong>Students from various backgrounds showed a high level of acceptance and positive perceptions of leveraging H5P interactive content in the self-paced MOOC. 
The findings suggest potential new uses of H5P interactive content in","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377901/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141761457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of ChatGPT Into a Course for Medical Students: Explorative Study on Teaching Scenarios, Students' Perception, and Applications.","authors":"Anita V Thomae, Claudia M Witt, Jürgen Barth","doi":"10.2196/50545","DOIUrl":"10.2196/50545","url":null,"abstract":"<p><strong>Background: </strong>Text-generating artificial intelligence (AI) such as ChatGPT offers many opportunities and challenges in medical education. Acquiring practical skills necessary for using AI in a clinical context is crucial, especially for medical education.</p><p><strong>Objective: </strong>This explorative study aimed to investigate the feasibility of integrating ChatGPT into teaching units and to evaluate the course and the importance of AI-related competencies for medical students. Since a possible application of ChatGPT in the medical field could be the generation of information for patients, we further investigated how such information is perceived by students in terms of persuasiveness and quality.</p><p><strong>Methods: </strong>ChatGPT was integrated into 3 different teaching units of a blended learning course for medical students. Using a mixed methods approach, quantitative and qualitative data were collected. As baseline data, we assessed students' characteristics, including their openness to digital innovation. The students evaluated the integration of ChatGPT into the course and shared their thoughts regarding the future of text-generating AI in medical education. The course was evaluated based on the Kirkpatrick Model, with satisfaction, learning progress, and applicable knowledge considered as key assessment levels. 
In ChatGPT-integrating teaching units, students evaluated videos featuring information for patients regarding their persuasiveness on treatment expectations in a self-experience experiment and critically reviewed information for patients written using ChatGPT 3.5 based on different prompts.</p><p><strong>Results: </strong>A total of 52 medical students participated in the study. The comprehensive evaluation of the course revealed elevated levels of satisfaction, learning progress, and applicability specifically in relation to the ChatGPT-integrating teaching units. Furthermore, all evaluation levels demonstrated an association with each other. Higher openness to digital innovation was associated with higher satisfaction and, to a lesser extent, with higher applicability. AI-related competencies in other courses of the medical curriculum were perceived as highly important by medical students. Qualitative analysis highlighted potential use cases of ChatGPT in teaching and learning. In ChatGPT-integrating teaching units, students rated information for patients generated using a basic ChatGPT prompt as \"moderate\" in terms of comprehensibility, patient safety, and the correct application of communication rules taught during the course. The students' ratings were considerably improved using an extended prompt. 
The same text, however, showed the smallest increase in treatment expectations when compared with information provided by humans (patient, clinician, and expert) via videos.</p><p><strong>Conclusions: </strong>This study offers valuable insights into integrating the development of AI competencies into a ","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11360267/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142037198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Newly Qualified Canadian Nurses' Experiences With Digital Health in the Workplace: Comparative Qualitative Analysis.","authors":"Manal Kleib, Antonia Arnaert, Lynn M Nagle, Rebecca Sugars, Daniel da Costa","doi":"10.2196/53258","DOIUrl":"10.2196/53258","url":null,"abstract":"<p><strong>Background: </strong>Clinical practice settings have increasingly become dependent on the use of digital or eHealth technologies such as electronic health records. It is vitally important to support nurses in adapting to digitalized health care systems; however, little is known about nursing graduates' experiences as they transition to the workplace.</p><p><strong>Objective: </strong>This study aims to (1) describe newly qualified nurses' experiences with digital health in the workplace, and (2) identify strategies that could help support new graduates' transition and practice with digital health.</p><p><strong>Methods: </strong>An exploratory descriptive qualitative design was used. A total of 14 nurses from Eastern and Western Canada participated in semistructured interviews and data were analyzed using inductive content analysis.</p><p><strong>Results: </strong>Three themes were identified: (1) experiences before becoming a registered nurse, (2) experiences upon joining the workplace, and (3) suggestions for bridging the gap in transition to digital health practice. Findings revealed more similarities than differences between participants with respect to gaps in digital health education, technology-related challenges, and their influence on nursing practice.</p><p><strong>Conclusions: </strong>Digital health is the foundation of contemporary health care; therefore, comprehensive education during nursing school and throughout professional nursing practice, as well as organizational support and policy, are critical pillars. 
Health systems investing in digital health technologies must create supportive work environments for nurses to thrive in technologically rich environments and increase their capacity to deliver the digital health future.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11369539/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142005452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Language Model-Powered Simulated Patient With Automated Feedback for History Taking: Prospective Study.","authors":"Friederike Holderried, Christian Stegemann-Philipps, Anne Herrmann-Werner, Teresa Festl-Wietek, Martin Holderried, Carsten Eickhoff, Moritz Mahling","doi":"10.2196/59213","DOIUrl":"10.2196/59213","url":null,"abstract":"<p><strong>Background: </strong>Although history taking is fundamental for diagnosing medical conditions, teaching and providing feedback on the skill can be challenging due to resource constraints. Virtual simulated patients and web-based chatbots have thus emerged as educational tools, with recent advancements in artificial intelligence (AI) such as large language models (LLMs) enhancing their realism and potential to provide feedback.</p><p><strong>Objective: </strong>In our study, we aimed to evaluate the effectiveness of a Generative Pretrained Transformer (GPT) 4 model to provide structured feedback on medical students' performance in history taking with a simulated patient.</p><p><strong>Methods: </strong>We conducted a prospective study involving medical students performing history taking with a GPT-powered chatbot. To that end, we designed a chatbot to simulate patients' responses and provide immediate feedback on the comprehensiveness of the students' history taking. Students' interactions with the chatbot were analyzed, and feedback from the chatbot was compared with feedback from a human rater. We measured interrater reliability and performed a descriptive analysis to assess the quality of feedback.</p><p><strong>Results: </strong>Most of the study's participants were in their third year of medical school. A total of 1894 question-answer pairs from 106 conversations were included in our analysis. GPT-4's role-play and responses were medically plausible in more than 99% of cases. Interrater reliability between GPT-4 and the human rater showed \"almost perfect\" agreement (Cohen κ=0.832). 
Lower agreement (κ<0.6), detected for 8 of the 45 feedback categories, highlighted topics on which the model's assessments were overly specific or diverged from human judgment.</p><p><strong>Conclusions: </strong>The GPT model was effective in providing structured feedback on history-taking dialogs provided by medical students. Although we identified some limitations regarding the specificity of feedback for certain feedback categories, the overall high agreement with human raters suggests that LLMs can be a valuable tool for medical education. Our findings, thus, advocate for the careful integration of AI-driven feedback mechanisms in medical training and highlight important aspects when LLMs are used in that context.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364946/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reforming China's Secondary Vocational Medical Education: Adapting to the Challenges and Opportunities of the AI Era.","authors":"Wenting Tong, Xiaowen Zhang, Haiping Zeng, Jianping Pan, Chao Gong, Hui Zhang","doi":"10.2196/48594","DOIUrl":"10.2196/48594","url":null,"abstract":"<p><strong>Unlabelled: </strong>China's secondary vocational medical education is essential for training primary health care personnel and enhancing public health responses. This education system currently faces challenges, primarily due to its emphasis on knowledge acquisition that overshadows the development and application of skills, especially in the context of emerging artificial intelligence (AI) technologies. This article delves into the impact of AI on medical practices and uses this analysis to suggest reforms for the vocational medical education system in China. AI is found to significantly enhance diagnostic capabilities, therapeutic decision-making, and patient management. However, it also brings about concerns such as potential job losses and necessitates the adaptation of medical professionals to new technologies. Proposed reforms include a greater focus on critical thinking, hands-on experiences, skill development, medical ethics, and integrating humanities and AI into the curriculum. 
These reforms require ongoing evaluation and sustained research to effectively prepare medical students for future challenges in the field.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11337726/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of a New Gynecologic Oncology Hashtag During Virtual-Only ASCO Annual Meetings: An X (Twitter) Social Network Analysis.","authors":"Geetu Bhandoria, Esra Bilir, Christina Uwins, Josep Vidal-Alaball, Aïna Fuster-Casanovas, Wasim Ahmed","doi":"10.2196/45291","DOIUrl":"10.2196/45291","url":null,"abstract":"<p><strong>Background: </strong>Official conference hashtags are commonly used to promote tweeting and social media engagement. The reach and impact of introducing a new hashtag during an oncology conference have yet to be studied. The American Society of Clinical Oncology (ASCO) conducts an annual global meeting, which was entirely virtual due to the COVID-19 pandemic in 2020 and 2021.</p><p><strong>Objective: </strong>This study aimed to assess the reach and impact (in the form of vertices and edges generated) and X (formerly Twitter) activity of the new hashtags #goASCO20 and #goASCO21 in the ASCO 2020 and 2021 virtual conferences.</p><p><strong>Methods: </strong>New hashtags (#goASCO20 and #goASCO21) were created for the ASCO virtual conferences in 2020 and 2021 to help focus gynecologic oncology discussion at the ASCO meetings. Data were retrieved using these hashtags (#goASCO20 for 2020 and #goASCO21 for 2021). A social network analysis was performed using the NodeXL software application.</p><p><strong>Results: </strong>The hashtags #goASCO20 and #goASCO21 had similar impacts on the social network. Analysis of the reach and impact of the individual hashtags found #goASCO20 to have 150 vertices and 2519 total edges and #goASCO21 to have 174 vertices and 2062 total edges. Mentions and tweets between 2020 and 2021 were also similar. The circles representing different users were spatially arranged in a more balanced way in 2021. Tweets using the #goASCO21 hashtag received significantly more responses than tweets using #goASCO20 (75 times in 2020 vs 360 times in 2021; z value=16.63 and P<.001). 
This indicates increased engagement in the subsequent year.</p><p><strong>Conclusions: </strong>Introducing a gynecologic oncology specialty-specific hashtag (#goASCO20 and #goASCO21) that is related but different from the official conference hashtag (#ASCO20 and #ASCO21) helped facilitate discussion on topics of interest to gynecologic oncologists during a virtual pan-oncology meeting. This impact was visible in the social network analysis.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11339558/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of Model Evolution and System Roles on ChatGPT's Performance in Chinese Medical Licensing Exams: Comparative Study.","authors":"Shuai Ming, Qingge Guo, Wenjun Cheng, Bo Lei","doi":"10.2196/52784","DOIUrl":"10.2196/52784","url":null,"abstract":"<p><strong>Background: </strong>With the increasing application of large language models such as ChatGPT in various industries, their potential in the medical domain, especially in standardized examinations, has become a focal point of research.</p><p><strong>Objective: </strong>The aim of this study is to assess the clinical performance of ChatGPT, focusing on its accuracy and reliability in the Chinese National Medical Licensing Examination (CNMLE).</p><p><strong>Methods: </strong>The CNMLE 2022 question set, consisting of 500 single-answer multiple-choice questions, was reclassified into 15 medical subspecialties. Each question was tested 8 to 12 times in Chinese on the OpenAI platform from April 24 to May 15, 2023. Three key factors were considered: the version of GPT-3.5 and 4.0, the prompt's designation of system roles tailored to medical subspecialties, and repetition for coherence. A passing accuracy threshold was established as 60%. χ2 tests and κ values were employed to evaluate the model's accuracy and consistency.</p><p><strong>Results: </strong>GPT-4.0 achieved a passing accuracy of 72.7%, which was significantly higher than that of GPT-3.5 (54%; P<.001). The variability rate of repeated responses from GPT-4.0 was lower than that of GPT-3.5 (9% vs 19.5%; P<.001). However, both models showed relatively good response coherence, with κ values of 0.778 and 0.610, respectively. System roles numerically increased accuracy for both GPT-4.0 (0.3%-3.7%) and GPT-3.5 (1.3%-4.5%), and reduced variability by 1.7% and 1.8%, respectively (P>.05). In subgroup analysis, ChatGPT achieved comparable accuracy among different question types (P>.05). 
GPT-4.0 surpassed the accuracy threshold in 14 of 15 subspecialties, while GPT-3.5 did so in 7 of 15 on the first response.</p><p><strong>Conclusions: </strong>GPT-4.0 passed the CNMLE and outperformed GPT-3.5 in key areas such as accuracy, consistency, and medical subspecialty expertise. Adding a system role slightly, but not significantly, enhanced the model's reliability and answer coherence. GPT-4.0 showed promising potential in medical education and clinical practice, meriting further study.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11336778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141976840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Health Care Students' Perceptions, Beliefs, and Attitudes Toward AI-Powered Language Models: Cross-Sectional Study.","authors":"Ivan Cherrez-Ojeda, Juan C Gallardo-Bastidas, Karla Robles-Velasco, María F Osorio, Eleonor Maria Velez Leon, Manuel Leon Velastegui, Patrícia Pauletto, F C Aguilar-Díaz, Aldo Squassi, Susana Patricia González Eras, Erita Cordero Carrasco, Karol Leonor Chavez Gonzalez, Juan C Calderon, Jean Bousquet, Anna Bedbrook, Marco Faytong-Haro","doi":"10.2196/51757","DOIUrl":"10.2196/51757","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT was not intended for use in health care, but it has potential benefits that depend on end-user understanding and acceptability, which is where health care students become crucial. There is still a limited amount of research in this area.</p><p><strong>Objective: </strong>The primary aim of our study was to assess the frequency of ChatGPT use, the perceived level of knowledge, the perceived risks associated with its use, and the ethical issues, as well as attitudes toward the use of ChatGPT in the context of education in the field of health. In addition, we aimed to examine whether there were differences across groups based on demographic variables. The second part of the study aimed to assess the association between the frequency of use, the level of perceived knowledge, the level of risk perception, and the level of perception of ethics as predictive factors for participants' attitudes toward the use of ChatGPT.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted from May to June 2023 encompassing students of medicine, nursing, dentistry, nutrition, and laboratory science across the Americas. The study used descriptive analysis, chi-square tests, and ANOVA to assess statistical significance across different categories. 
The study used several ordinal logistic regression models to analyze the impact of predictive factors (frequency of use, perception of knowledge, perception of risk, and ethics perception scores) on attitude as the dependent variable. The models were adjusted for gender, institution type, major, and country. Stata was used to conduct all the analyses.</p><p><strong>Results: </strong>Of 2661 health care students, 42.99% (n=1144) were unaware of ChatGPT. The median score of knowledge was \"minimal\" (median 2.00, IQR 1.00-3.00). Most respondents (median 2.61, IQR 2.11-3.11) regarded ChatGPT as neither ethical nor unethical. Most participants (median 3.89, IQR 3.44-4.34) \"somewhat agreed\" that ChatGPT (1) benefits health care settings, (2) provides trustworthy data, (3) is a helpful tool for clinical and educational medical information access, and (4) makes the work easier. In total, 70% (7/10) of people used it for homework. As the perceived knowledge of ChatGPT increased, there was a stronger tendency with regard to having a favorable attitude toward ChatGPT. Higher ethical consideration perception ratings increased the likelihood of considering ChatGPT as a source of trustworthy health care information (odds ratio [OR] 1.620, 95% CI 1.498-1.752), beneficial in medical issues (OR 1.495, 95% CI 1.452-1.539), and useful for medical literature (OR 1.494, 95% CI 1.426-1.564; P<.001 for all results).</p><p><strong>Conclusions: </strong>Over 40% of American health care students (1144/2661, 42.99%) were unaware of ChatGPT despite its extensive use in the health field. Our data revealed the positive attitudes toward ChatGPT and the desire to learn more about it. 
Medical educators mus","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350293/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141971968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educational Utility of Clinical Vignettes Generated in Japanese by ChatGPT-4: Mixed Methods Study.","authors":"Hiromizu Takahashi, Kiyoshi Shikino, Takeshi Kondo, Akira Komori, Yuji Yamada, Mizue Saita, Toshio Naito","doi":"10.2196/59133","DOIUrl":"10.2196/59133","url":null,"abstract":"<p><strong>Background: </strong>Evaluating the accuracy and educational utility of artificial intelligence-generated medical cases, especially those produced by large language models such as ChatGPT-4 (developed by OpenAI), is crucial yet underexplored.</p><p><strong>Objective: </strong>This study aimed to assess the educational utility of ChatGPT-4-generated clinical vignettes and their applicability in educational settings.</p><p><strong>Methods: </strong>Using a convergent mixed methods design, a web-based survey was conducted from January 8 to 28, 2024, to evaluate 18 medical cases generated by ChatGPT-4 in Japanese. In the survey, 6 main question items were used to evaluate the quality of the generated clinical vignettes and their educational utility, which are information quality, information accuracy, educational usefulness, clinical match, terminology accuracy (TA), and diagnosis difficulty. Feedback was solicited from physicians specializing in general internal medicine or general medicine and experienced in medical education. Chi-square and Mann-Whitney U tests were performed to identify differences among cases, and linear regression was used to examine trends associated with physicians' experience. Thematic analysis of qualitative feedback was performed to identify areas for improvement and confirm the educational utility of the cases.</p><p><strong>Results: </strong>Of the 73 invited participants, 71 (97%) responded. The respondents, primarily male (64/71, 90%), spanned a broad range of practice years (from 1976 to 2017) and represented diverse hospital sizes throughout Japan. 
The majority deemed the information quality (mean 0.77, 95% CI 0.75-0.79) and information accuracy (mean 0.68, 95% CI 0.65-0.71) to be satisfactory, with these responses being based on binary data. The average scores assigned were 3.55 (95% CI 3.49-3.60) for educational usefulness, 3.70 (95% CI 3.65-3.75) for clinical match, 3.49 (95% CI 3.44-3.55) for TA, and 2.34 (95% CI 2.28-2.40) for diagnosis difficulty, based on a 5-point Likert scale. Statistical analysis showed significant variability in content quality and relevance across the cases (P<.001 after Bonferroni correction). Participants suggested improvements in generating physical findings, using natural language, and enhancing medical TA. The thematic analysis highlighted the need for clearer documentation, clinical information consistency, content relevance, and patient-centered case presentations.</p><p><strong>Conclusions: </strong>ChatGPT-4-generated medical cases written in Japanese possess considerable potential as resources in medical education, with recognized adequacy in quality and accuracy. Nevertheless, there is a notable need for enhancements in the precision and realism of case details. This study emphasizes ChatGPT-4's value as an adjunctive educational tool in the medical field, requiring expert oversight for optimal application.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350316/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141971965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resources to Support Canadian Nurses to Deliver Virtual Care: Environmental Scan.","authors":"Manal Kleib, Antonia Arnaert, Lynn M Nagle, Elizabeth Mirekuwaa Darko, Sobia Idrees, Daniel da Costa, Shamsa Ali","doi":"10.2196/53254","DOIUrl":"10.2196/53254","url":null,"abstract":"<p><strong>Background: </strong>Regulatory and professional nursing associations have an important role in ensuring that nurses provide safe, competent, and ethical care and are capable of adapting to emerging phenomena that influence society and population health needs. Telehealth and more recently virtual care are 2 digital health modalities that have gained momentum during the COVID-19 pandemic. Telehealth refers to telecommunications and digital communication technologies used to deliver health care, support health care provider and patient education, and facilitate self-care. Virtual care facilitates the delivery of health care services via any remote communication between patients and health care providers and among health care providers, either synchronously or asynchronously, through information and communication technologies. Despite nurses' adaptability to delivering virtual care, many have also reported challenges.</p><p><strong>Objective: </strong>This study aims to describe resources about virtual care, digital health, and nursing informatics (ie, practice guidelines and fact sheets) available to Canadian nurses through their regulatory and professional associations.</p><p><strong>Methods: </strong>An environmental scan was conducted between March and July 2023. The websites of nursing regulatory bodies across 13 Canadian provinces and territories and relevant nursing and a few nonnursing professional associations were searched. Data were extracted from the websites of these organizations to map out educational materials, training opportunities, and guidelines made available for nurses to learn and adapt to the ongoing digitalization of the health care system. 
Information from each source was summarized and analyzed using an inductive content analysis approach to identify categories and themes. The Virtual Health Competency Framework was applied to support the analysis process.</p><p><strong>Results: </strong>Seven themes were identified: (1) types of resources available about virtual care, (2) terminologies used in virtual care resources, (3) currency of virtual care resources identified, (4) requirements for providing virtual care between provinces, (5) resources through professional nursing associations and other relevant organizations, (6) regulatory guidance versus competency in virtual care, and (7) resources about digital health and nursing informatics. Results also revealed that practice guidance for delivering telehealth existed before the COVID-19 pandemic, but it was further expanded during the pandemic. Differences were noted across available resources with respect to terms used (eg, telenursing, telehealth, or virtual care), types of documents (eg, guideline vs fact sheet), and the depth of information shared. Only 2 associations provided comprehensive telenursing practice guidelines. Resources relative to digital health and nursing informatics exist, but variations between provinces were also noted.</p><p><strong>Conclu","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350304/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141971967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}