{"title":"Using Project Extension for Community Healthcare Outcomes to Enhance Substance Use Disorder Care in Primary Care: Mixed Methods Study.","authors":"MacKenzie Koester, Rosemary Motz, Ariel Porto, Nikita Reyes Nieves, Karen Ashley","doi":"10.2196/48135","DOIUrl":"10.2196/48135","url":null,"abstract":"<p><strong>Background: </strong>Substance use and overdose deaths make up a substantial portion of injury-related deaths in the United States, with the state of Ohio leading the nation in rates of diagnosed substance use disorder (SUD). Ohio's growing epidemic has indicated a need to improve SUD care in a primary care setting through the engagement of multidisciplinary providers and the use of a comprehensive approach to care.</p><p><strong>Objective: </strong>The purpose of this study was to assess the ability of the Weitzman Extension for Community Healthcare Outcomes (ECHO): Comprehensive Substance Use Disorder Care program to both address and meet 7 series learning objectives and address substances by analyzing (1) the frequency of exposure to the learning objective topics and substance types during case discussions and (2) participants' change in knowledge, self-efficacy, attitudes, and skills related to the treatment of SUDs pre- to postseries. The 7 series learning objective themes included harm reduction, team-based care, behavioral techniques, medication-assisted treatment, trauma-informed care, co-occurring conditions, and social determinants of health.</p><p><strong>Methods: </strong>We used a mixed methods approach using a conceptual content analysis based on series learning objectives and substances and a 2-tailed paired-samples t test of participants' self-reported learner outcomes. The content analysis gauged the frequency and dose of learning objective themes and illicit and nonillicit substances mentioned in participant case presentations and discussions, and the paired-samples t test compared participants' knowledge, self-efficacy, attitudes, and skills associated with learning objectives and medication management of substances from pre- to postseries.</p><p><strong>Results: </strong>The results of the content analysis indicated that 3 learning objective themes-team-based care, harm reduction, and social determinants of health-resulted in the highest frequencies and dose, appearing in 100% (n=22) of case presentations and discussions. Alcohol had the highest frequency and dose among the illicit and nonillicit substances, appearing in 81% (n=18) of case presentations and discussions. The results of the paired-samples t test indicated statistically significant increases in knowledge domain statements related to polysubstance use (P=.02), understanding the approach other disciplines use in SUD care (P=.02), and medication management strategies for nicotine (P=.03) and opioid use disorder (P=.003). Statistically significant increases were observed for 2 self-efficacy domain statements regarding medication management for nicotine (P=.002) and alcohol use disorder (P=.02). 
Further, 1 statistically significant increase in the skill domain was observed regarding using the stages of change theory in interventions (P=.03).</p><p><strong>Conclusions: </strong>These findings indicate that the ECHO program's content aligned with its stated l","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e48135"},"PeriodicalIF":3.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11019412/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140337089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
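The pre- to postseries comparisons above rest on paired-samples t tests. The following is a purely illustrative sketch, not the authors' analysis code; the arrays are invented stand-ins for paired self-ratings.

```python
# Illustrative sketch only: 2-tailed paired-samples t test on hypothetical
# pre/post self-ratings; the values are invented, not study data.
from scipy import stats

pre_scores = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]   # preseries self-ratings (hypothetical)
post_scores = [3, 4, 3, 4, 4, 3, 4, 3, 3, 5]  # postseries self-ratings (hypothetical)

t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)  # 2-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```
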
{"title":"Measuring the Digital Competence of Health Professionals: Scoping Review.","authors":"Anne Mainz, Julia Nitsche, Vera Weirauch, Sven Meister","doi":"10.2196/55737","DOIUrl":"10.2196/55737","url":null,"abstract":"<p><strong>Background: </strong>Digital competence is listed as one of the key competences for lifelong learning and is increasing in importance not only in private life but also in professional life. There is consensus within the health care sector that digital competence (or digital literacy) is needed in various professional fields. However, it is still unclear what exactly the digital competence of health professionals should include and how it can be measured.</p><p><strong>Objective: </strong>This scoping review aims to provide an overview of the common definitions of digital literacy in scientific literature in the field of health care and the existing measurement instruments.</p><p><strong>Methods: </strong>Peer-reviewed scientific papers from the last 10 years (2013-2023) in English or German that deal with the digital competence of health care workers in both outpatient and inpatient care were included. The databases ScienceDirect, Scopus, PubMed, EBSCOhost, MEDLINE, OpenAIRE, ERIC, OAIster, Cochrane Library, CAMbase, APA PsycNet, and Psyndex were searched for literature. The review follows the JBI methodology for scoping reviews, and the description of the results is based on the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist.</p><p><strong>Results: </strong>The initial search identified 1682 papers, of which 46 (2.73%) were included in the synthesis. The review results show that there is a strong focus on technical skills and knowledge with regard to both the definitions of digital competence and the measurement tools. A wide range of competences were identified within the analyzed works and integrated into a validated competence model in the areas of technical, methodological, social, and personal competences. The measurement instruments mainly used self-assessment of skills and knowledge as an indicator of competence and differed greatly in their statistical quality.</p><p><strong>Conclusions: </strong>The identified multitude of subcompetences illustrates the complexity of digital competence in health care, and existing measuring instruments are not yet able to reflect this complexity.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e55737"},"PeriodicalIF":3.2,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11015375/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.","authors":"Masao Noda, Takayoshi Ueno, Ryota Koshu, Yuji Takaso, Mari Dias Shimada, Chizu Saito, Hisashi Sugimoto, Hiroaki Fushiki, Makoto Ito, Akihiro Nomura, Tomokazu Yoshizaki","doi":"10.2196/57054","DOIUrl":"10.2196/57054","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence models can learn from medical literature and clinical cases and generate answers that rival human experts. However, challenges remain in the analysis of complex data containing images and diagrams.</p><p><strong>Objective: </strong>This study aims to assess the answering capabilities and accuracy of ChatGPT-4 Vision (GPT-4V) for a set of 100 questions, including image-based questions, from the 2023 otolaryngology board certification examination.</p><p><strong>Methods: </strong>Answers to 100 questions from the 2023 otolaryngology board certification examination, including image-based questions, were generated using GPT-4V. The accuracy rate was evaluated using different prompts, and the presence of images, clinical area of the questions, and variations in the answer content were examined.</p><p><strong>Results: </strong>The accuracy rate for text-only input was, on average, 24.7% but improved to 47.3% with the addition of English translation and prompts (P<.001). The average nonresponse rate for text-only input was 46.3%; this decreased to 2.7% with the addition of English translation and prompts (P<.001). The accuracy rate was lower for image-based questions than for text-only questions across all types of input, with a relatively high nonresponse rate. General questions and questions from the fields of head and neck allergies and nasal allergies had relatively high accuracy rates, which increased with the addition of translation and prompts. In terms of content, questions related to anatomy had the highest accuracy rate. For all content types, the addition of translation and prompts increased the accuracy rate. As for the performance based on image-based questions, the average of correct answer rate with text-only input was 30.4%, and that with text-plus-image input was 41.3% (P=.02).</p><p><strong>Conclusions: </strong>Examination of artificial intelligence's answering capabilities for the otolaryngology board certification examination improves our understanding of its potential and limitations in this field. Although the improvement was noted with the addition of translation and prompts, the accuracy rate for image-based questions was lower than that for text-based questions, suggesting room for improvement in GPT-4V at this stage. Furthermore, text-plus-image input answers a higher rate in image-based questions. 
Our findings imply the usefulness and potential of GPT-4V in medicine; however, future consideration of safe use methods is needed.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e57054"},"PeriodicalIF":3.6,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11009855/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
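The text-plus-image condition in evaluations like this one is typically implemented by sending the question text together with the figure to a vision-capable chat model. Below is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and file path are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch, not the authors' evaluation code: submit one exam question
# plus its image to a vision-capable chat model. The model name, prompt text,
# and file path are assumptions for illustration.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("question_42_figure.png", "rb") as f:  # hypothetical image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Answer the following board examination question "
                     "with a single option (a-e): <question text here>"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```
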
{"title":"Telehealth Education in Allied Health Care and Nursing: Web-Based Cross-Sectional Survey of Students' Perceived Knowledge, Skills, Attitudes, and Experience.","authors":"Lena Rettinger, Peter Putz, Lea Aichinger, Susanne Maria Javorszky, Klaus Widhalm, Veronika Ertelt-Bach, Andreas Huber, Sevan Sargis, Lukas Maul, Oliver Radinger, Franz Werner, Sebastian Kuhn","doi":"10.2196/51112","DOIUrl":"10.2196/51112","url":null,"abstract":"<p><strong>Background: </strong>The COVID-19 pandemic has highlighted the growing relevance of telehealth in health care. Assessing health care and nursing students' telehealth competencies is crucial for its successful integration into education and practice.</p><p><strong>Objective: </strong>We aimed to assess students' perceived telehealth knowledge, skills, attitudes, and experiences. In addition, we aimed to examine students' preferences for telehealth content and teaching methods within their curricula.</p><p><strong>Methods: </strong>We conducted a cross-sectional web-based study in May 2022. A project-specific questionnaire, developed and refined through iterative feedback and face-validity testing, addressed topics such as demographics, personal perceptions, and professional experience with telehealth and solicited input on potential telehealth course content. Statistical analyses were conducted on surveys with at least a 50% completion rate, including descriptive statistics of categorical variables, graphical representation of results, and Kruskal Wallis tests for central tendencies in subgroup analyses.</p><p><strong>Results: </strong>A total of 261 students from 7 bachelor's and 4 master's health care and nursing programs participated in the study. Most students expressed interest in telehealth (180/261, 69% very or rather interested) and recognized its importance in their education (215/261, 82.4% very or rather important). However, most participants reported limited knowledge of telehealth applications concerning their profession (only 7/261, 2.7% stated profound knowledge) and limited active telehealth experience with various telehealth applications (between 18/261, 6.9% and 63/261, 24.1%). Statistically significant differences were found between study programs regarding telehealth interest (P=.005), knowledge (P<.001), perceived importance in education (P<.001), and perceived relevance after the pandemic (P=.004). Practical training with devices, software, and apps and telehealth case examples with various patient groups were perceived as most important for integration in future curricula. Most students preferred both interdisciplinary and program-specific courses.</p><p><strong>Conclusions: </strong>This study emphasizes the need to integrate telehealth into health care education curricula, as students state positive telehealth attitudes but seem to be not adequately prepared for its implementation. 
To optimally prepare future health professionals for the increasing role of telehealth in practice, the results of this study can be considered when designing telehealth curricula.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e51112"},"PeriodicalIF":3.6,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10995793/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140176876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
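The between-program comparisons reported above rely on Kruskal-Wallis tests. The sketch below is purely illustrative (hypothetical Likert-style ratings for three invented program groups, not survey data), using scipy.

```python
# Illustrative sketch: Kruskal-Wallis test comparing interest ratings across
# three hypothetical study programs; the values are invented, not survey data.
from scipy import stats

physiotherapy = [4, 5, 3, 4, 4, 5, 2, 4]  # 1 = not interested ... 5 = very interested
nursing = [3, 3, 4, 2, 3, 4, 3, 3]
midwifery = [5, 4, 4, 5, 3, 4, 5, 4]

h_stat, p_value = stats.kruskal(physiotherapy, nursing, midwifery)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
```
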
{"title":"Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.","authors":"Takahiro Nakao, Soichiro Miki, Yuta Nakamura, Tomohiro Kikuchi, Yukihiro Nomura, Shouhei Hanaoka, Takeharu Yoshikawa, Osamu Abe","doi":"10.2196/54393","DOIUrl":"10.2196/54393","url":null,"abstract":"<p><strong>Background: </strong>Previous research applying large language models (LLMs) to medicine was focused on text-based information. Recently, multimodal variants of LLMs acquired the capability of recognizing images.</p><p><strong>Objective: </strong>We aim to evaluate the image recognition capability of generative pretrained transformer (GPT)-4V, a recent multimodal LLM developed by OpenAI, in the medical field by testing how visual information affects its performance to answer questions in the 117th Japanese National Medical Licensing Examination.</p><p><strong>Methods: </strong>We focused on 108 questions that had 1 or more images as part of a question and presented GPT-4V with the same questions under two conditions: (1) with both the question text and associated images and (2) with the question text only. We then compared the difference in accuracy between the 2 conditions using the exact McNemar test.</p><p><strong>Results: </strong>Among the 108 questions with images, GPT-4V's accuracy was 68% (73/108) when presented with images and 72% (78/108) when presented without images (P=.36). For the 2 question categories, clinical and general, the accuracies with and those without images were 71% (70/98) versus 78% (76/98; P=.21) and 30% (3/10) versus 20% (2/10; P≥.99), respectively.</p><p><strong>Conclusions: </strong>The additional information from the images did not significantly improve the performance of GPT-4V in the Japanese National Medical Licensing Examination.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e54393"},"PeriodicalIF":3.6,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10966435/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140102547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sharing Digital Health Educational Resources in a One-Stop Shop Portal: Tutorial on the Catalog and Index of Digital Health Teaching Resources (CIDHR) Semantic Search Engine.","authors":"Julien Grosjean, Arriel Benis, Jean-Charles Dufour, Émeline Lejeune, Flavien Disson, Badisse Dahamna, Hélène Cieslik, Romain Léguillon, Matthieu Faure, Frank Dufour, Pascal Staccini, Stéfan Jacques Darmoni","doi":"10.2196/48393","DOIUrl":"10.2196/48393","url":null,"abstract":"<p><strong>Background: </strong>Access to reliable and accurate digital health web-based resources is crucial. However, the lack of dedicated search engines for non-English languages, such as French, is a significant obstacle in this field. Thus, we developed and implemented a multilingual, multiterminology semantic search engine called Catalog and Index of Digital Health Teaching Resources (CIDHR). CIDHR is freely accessible to everyone, with a focus on French-speaking resources. CIDHR has been initiated to provide validated, high-quality content tailored to the specific needs of each user profile, be it students or professionals.</p><p><strong>Objective: </strong>This study's primary aim in developing and implementing the CIDHR is to improve knowledge sharing and spreading in digital health and health informatics and expand the health-related educational community, primarily French speaking but also in other languages. We intend to support the continuous development of initial (ie, bachelor level), advanced (ie, master and doctoral levels), and continuing training (ie, professionals and postgraduate levels) in digital health for health and social work fields. The main objective is to describe the development and implementation of CIDHR. The hypothesis guiding this research is that controlled vocabularies dedicated to medical informatics and digital health, such as the Medical Informatics Multilingual Ontology (MIMO) and the concepts structuring the French National Referential on Digital Health (FNRDH), to index digital health teaching and learning resources, are effectively increasing the availability and accessibility of these resources to medical students and other health care professionals.</p><p><strong>Methods: </strong>First, resource identification is processed by medical librarians from websites and scientific sources preselected and validated by domain experts and surveyed every week. Then, based on MIMO and FNRDH, the educational resources are indexed for each related knowledge domain. The same resources are also tagged with relevant academic and professional experience levels. Afterward, the indexed resources are shared with the digital health teaching and learning community. The last step consists of assessing CIDHR by obtaining informal feedback from users.</p><p><strong>Results: </strong>Resource identification and evaluation processes were executed by a dedicated team of medical librarians, aiming to collect and curate an extensive collection of digital health teaching and learning resources. The resources that successfully passed the evaluation process were promptly included in CIDHR. These resources were diligently indexed (with MIMO and FNRDH) and tagged for the study field and degree level. 
By October 2023, a total of 371 indexed resources were available on a dedicated portal.</p><p><strong>Conclusions: </strong>CIDHR is a multilingual digital health education semantic search engine and platform that aims to increase the acce","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e48393"},"PeriodicalIF":3.6,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10949124/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140022799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
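The indexing workflow described in the Methods (concept indexing with MIMO and FNRDH plus audience-level tagging) can be pictured as attaching controlled-vocabulary concepts and level tags to each resource record. The sketch below is purely illustrative: the field names, concept labels, and URL are invented assumptions, not CIDHR's actual data model or content.

```python
# Purely illustrative sketch of an indexed teaching-resource record; field
# names, concept labels, and URL are invented, not CIDHR's actual schema.
from dataclasses import dataclass, field

@dataclass
class TeachingResource:
    title: str
    url: str
    languages: list[str]
    mimo_concepts: list[str] = field(default_factory=list)    # controlled-vocabulary concepts (MIMO)
    fnrdh_concepts: list[str] = field(default_factory=list)   # FNRDH concepts
    audience_levels: list[str] = field(default_factory=list)  # eg, bachelor, master, continuing education

resource = TeachingResource(
    title="Introduction à la télémédecine",
    url="https://example.org/telemedecine-intro",  # hypothetical URL
    languages=["fr"],
    mimo_concepts=["telemedicine", "health informatics"],
    fnrdh_concepts=["digital health fundamentals"],
    audience_levels=["bachelor", "continuing education"],
)

# A naive concept filter, standing in for the semantic search layer:
def matches(res: TeachingResource, concept: str) -> bool:
    return concept in res.mimo_concepts or concept in res.fnrdh_concepts

print(matches(resource, "telemedicine"))  # True
```
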
{"title":"Development of a Clinical Simulation Video to Evaluate Multiple Domains of Clinical Competence: Cross-Sectional Study.","authors":"Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Daiki Yokokawa, Yu Yamamoto, Hiroyuki Kobayashi, Taro Shimizu, Yasuharu Tokuda","doi":"10.2196/54401","DOIUrl":"10.2196/54401","url":null,"abstract":"<p><strong>Background: </strong>Medical students in Japan undergo a 2-year postgraduate residency program to acquire clinical knowledge and general medical skills. The General Medicine In-Training Examination (GM-ITE) assesses postgraduate residents' clinical knowledge. A clinical simulation video (CSV) may assess learners' interpersonal abilities.</p><p><strong>Objective: </strong>This study aimed to evaluate the relationship between GM-ITE scores and resident physicians' diagnostic skills by having them watch a CSV and to explore resident physicians' perceptions of the CSV's realism, educational value, and impact on their motivation to learn.</p><p><strong>Methods: </strong>The participants included 56 postgraduate medical residents who took the GM-ITE between January 21 and January 28, 2021; watched the CSV; and then provided a diagnosis. The CSV and GM-ITE scores were compared, and the validity of the simulations was examined using discrimination indices, wherein ≥0.20 indicated high discriminatory power and >0.40 indicated a very good measure of the subject's qualifications. Additionally, we administered an anonymous questionnaire to ascertain participants' views on the realism and educational value of the CSV and its impact on their motivation to learn.</p><p><strong>Results: </strong>Of the 56 participants, 6 (11%) provided the correct diagnosis, and all were from the second postgraduate year. All domains indicated high discriminatory power. The (anonymous) follow-up responses indicated that the CSV format was more suitable than the conventional GM-ITE for assessing clinical competence. The anonymous survey revealed that 12 (52%) participants found the CSV format more suitable than the GM-ITE for assessing clinical competence, 18 (78%) affirmed the realism of the video simulation, and 17 (74%) indicated that the experience increased their motivation to learn.</p><p><strong>Conclusions: </strong>The findings indicated that CSV modules simulating real-world clinical examinations were successful in assessing examinees' clinical competence across multiple domains. The study demonstrated that the CSV not only augmented the assessment of diagnostic skills but also positively impacted learners' motivation, suggesting a multifaceted role for simulation in medical education.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e54401"},"PeriodicalIF":3.6,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10940988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Feasibility of Using ChatGPT to Create Just-in-Time Adaptive Physical Activity mHealth Intervention Content: Case Study.","authors":"Amanda Willms, Sam Liu","doi":"10.2196/51426","DOIUrl":"10.2196/51426","url":null,"abstract":"<p><strong>Background: </strong>Achieving physical activity (PA) guidelines' recommendation of 150 minutes of moderate-to-vigorous PA per week has been shown to reduce the risk of many chronic conditions. Despite the overwhelming evidence in this field, PA levels remain low globally. By creating engaging mobile health (mHealth) interventions through strategies such as just-in-time adaptive interventions (JITAIs) that are tailored to an individual's dynamic state, there is potential to increase PA levels. However, generating personalized content can take a long time due to various versions of content required for the personalization algorithms. ChatGPT presents an incredible opportunity to rapidly produce tailored content; however, there is a lack of studies exploring its feasibility.</p><p><strong>Objective: </strong>This study aimed to (1) explore the feasibility of using ChatGPT to create content for a PA JITAI mobile app and (2) describe lessons learned and future recommendations for using ChatGPT in the development of mHealth JITAI content.</p><p><strong>Methods: </strong>During phase 1, we used Pathverse, a no-code app builder, and ChatGPT to develop a JITAI app to help parents support their child's PA levels. The intervention was developed based on the Multi-Process Action Control (M-PAC) framework, and the necessary behavior change techniques targeting the M-PAC constructs were implemented in the app design to help parents support their child's PA. The acceptability of using ChatGPT for this purpose was discussed to determine its feasibility. In phase 2, we summarized the lessons we learned during the JITAI content development process using ChatGPT and generated recommendations to inform future similar use cases.</p><p><strong>Results: </strong>In phase 1, by using specific prompts, we efficiently generated content for 13 lessons relating to increasing parental support for their child's PA following the M-PAC framework. It was determined that using ChatGPT for this case study to develop PA content for a JITAI was acceptable. In phase 2, we summarized our recommendations into the following six steps when using ChatGPT to create content for mHealth behavior interventions: (1) determine target behavior, (2) ground the intervention in behavior change theory, (3) design the intervention structure, (4) input intervention structure and behavior change constructs into ChatGPT, (5) revise the ChatGPT response, and (6) customize the response to be used in the intervention.</p><p><strong>Conclusions: </strong>ChatGPT offers a remarkable opportunity for rapid content creation in the context of an mHealth JITAI. Although our case study demonstrated that ChatGPT was acceptable, it is essential to approach its use, along with other language models, with caution. Before delivering content to population groups, expert review is crucial to ensure accuracy and relevancy. 
Future research and application of these guidelines are imperative as we deepen our unde","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e51426"},"PeriodicalIF":3.6,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10940976/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
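Step 4 of the six-step recommendation (inputting the intervention structure and behavior change constructs into ChatGPT) amounts to a structured prompt. The sketch below is a hypothetical illustration using the OpenAI Python client; the prompt wording, model name, and the example M-PAC construct are assumptions, not the study's actual prompts.

```python
# Hypothetical illustration of step 4: prompting a chat model with the
# intervention structure and a behavior change construct. Prompt text,
# model name, and construct label are assumptions, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are writing lesson content for a just-in-time adaptive intervention "
    "that helps parents support their child's physical activity.\n"
    "Behavior change framework: Multi-Process Action Control (M-PAC).\n"
    "Target construct: instrumental attitude (assumed example).\n"
    "Write a 100-word lesson message in an encouraging, plain-language tone."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content  # step 5: revise; step 6: customize
print(draft)
```
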
{"title":"Using ChatGPT-Like Solutions to Bridge the Communication Gap Between Patients With Rheumatoid Arthritis and Health Care Professionals.","authors":"Chih-Wei Chen, Paul Walter, James Cheng-Chung Wei","doi":"10.2196/48989","DOIUrl":"10.2196/48989","url":null,"abstract":"<p><p>The communication gap between patients and health care professionals has led to increased disputes and resource waste in the medical domain. The development of artificial intelligence and other technologies brings new possibilities to solve this problem. This viewpoint paper proposes a new relationship between patients and health care professionals-\"shared decision-making\"-allowing both sides to obtain a deeper understanding of the disease and reach a consensus during diagnosis and treatment. Then, this paper discusses the important impact of ChatGPT-like solutions in treating rheumatoid arthritis using methotrexate from clinical and patient perspectives. For clinical professionals, ChatGPT-like solutions could provide support in disease diagnosis, treatment, and clinical trials, but attention should be paid to privacy, confidentiality, and regulatory norms. For patients, ChatGPT-like solutions allow easy access to massive amounts of information; however, the information should be carefully managed to ensure safe and effective care. To ensure the effective application of ChatGPT-like solutions in improving the relationship between patients and health care professionals, it is essential to establish a comprehensive database and provide legal, ethical, and other support. Above all, ChatGPT-like solutions could benefit patients and health care professionals if they ensure evidence-based solutions and data protection and collaborate with regulatory authorities and regulatory evolution.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e48989"},"PeriodicalIF":3.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10933717/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139973849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment.","authors":"Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash","doi":"10.2196/57594","DOIUrl":"10.2196/57594","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.2196/45312.].</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e57594"},"PeriodicalIF":3.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10933712/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}