MedEdPublish (2016) Pub Date: 2023-11-30; eCollection Date: 2023-01-01; DOI: 10.12688/mep.19498.2
Janse Schermerhorn, Shelby Wilcox, Steven Durning, Joseph Costello, Candace Norton, Holly Meyer
{"title":"Masters in health professions education programs as they choose to represent themselves: A website review.","authors":"Janse Schermerhorn, Shelby Wilcox, Steven Durning, Joseph Costello, Candace Norton, Holly Meyer","doi":"10.12688/mep.19498.2","DOIUrl":"10.12688/mep.19498.2","url":null,"abstract":"<p><strong>Introduction: </strong>In an age of increasing face-to-face, blended, and online Health Professions Education offerings, students have more choices of institutions at which to pursue their degree. For an applicant, the first step is often to learn more about a program through its website. Websites allow programs to convey their unique voice and to share their mission and values with others, such as applicants, researchers, and academics. Additionally, as the number of masters in health professions education (MHPE), or equivalent, programs rapidly grows, websites can share the priorities of these programs.</p><p><strong>Methods: </strong>In this study, we conducted a review of 158 MHPE websites to explore their geographical distributions, missions, educational concentrations, and various programmatic components.</p><p><strong>Results: </strong>We compiled this information, synthesized pertinent aspects such as program similarities and differences, and highlighted omissions of critical data.</p><p><strong>Conclusions: </strong>Given that websites are often the first point of contact for prospective applicants, curious collaborators, and potential faculty, the digital image of MHPE programs matters. We believe our findings demonstrate opportunities for growth within institutions and help the field identify the priorities of MHPE programs. As programs begin to shape their websites with more intentionality, they can reflect their divergence from or convergence with other programs as they see fit and, therefore, attract the individuals who best match this identity.
Periodic reviews of the breadth of programs, such as the one undertaken here, are necessary to capture diversifying goals and serve to advance the field of MHPE as a whole.</p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"13 ","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10714103/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138810410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faculty development for strengthening online teaching capability: a mixed-methods study of what staff want, evaluated with Kirkpatrick's model of teaching effectiveness.","authors":"Rachelle Singleton, Daniela Ruiz Cosignani, Monica Kam, Megan Clune, Amanda Charlton, Tanisha Jowsey","doi":"10.12688/mep.19692.2","DOIUrl":"10.12688/mep.19692.2","url":null,"abstract":"<p><strong>Background: </strong>Globally, tertiary teachers are increasingly being pushed and pulled into online teaching. While most developments in online education have focused on the student perspective, few studies have reported faculty development (FD) initiatives for increasing online teaching capability and confidence from a staff perspective.</p><p><strong>Methods: </strong>We designed and evaluated FD workshops on the use of H5P software for interactive online teaching, drawing on five datasets. We used educational theory (Mayer's multimedia principles, active learning) to design our FD and evaluated our FD initiatives using the Best Evidence Medical Education (BEME) 2006 modified Kirkpatrick levels.</p><p><strong>Results: </strong>Teaching staff reported that Communities of Practice were important for their learning and emotional support. Uptake and deployment of FD skills depended on the interactivity of FD sessions, their timeliness, and sufficient time allocated to attend and implement. Staff who applied FD learning to their online teaching created interactive learning resources. This content was associated with an increase in student grades and with the roll-out of an institutional site-wide H5P license.</p><p><strong>Conclusion: </strong>This paper demonstrates an effective strategy for upskilling and upscaling faculty development. The use of H5P as a teaching tool enhances student learning. For successful FD, we make four recommendations.
These are: provide just-in-time learning and allocate time for FD and for staff to create online teaching material; foster supportive communities; offer personalized support; and design hands-on active learning.</p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"13 ","pages":"127"},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10739185/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139032889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MedEdPublish (2016) Pub Date: 2023-11-21; eCollection Date: 2023-01-01; DOI: 10.12688/mep.19732.2
Justin Peacock, Andrea Austin, Marina Shapiro, Alexis Battista, Anita Samuel
{"title":"Accelerating medical education with ChatGPT: an implementation guide.","authors":"Justin Peacock, Andrea Austin, Marina Shapiro, Alexis Battista, Anita Samuel","doi":"10.12688/mep.19732.2","DOIUrl":"10.12688/mep.19732.2","url":null,"abstract":"<p><p>Chatbots powered by artificial intelligence have revolutionized many industries and fields of study, including medical education. Medical educators are increasingly asked to perform more administrative, written, and assessment functions with less time and fewer resources. Safe use of chatbots like ChatGPT can help medical educators perform these functions efficiently. In this article, we provide medical educators with tips for implementing ChatGPT in medical education. Through creativity and careful construction of prompts, medical educators can use these and other chatbot implementations in their practice.</p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"13 ","pages":"64"},"PeriodicalIF":0.0,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10910173/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140029720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Isabelle Bosi, Deborah O'Mara, Tyler Clark, Nounu Sarukkali Patabendige, Sean E. Kennedy, Hasantha Gunasekera
{"title":"Associations between item characteristics and statistical performance for paediatric medical student multiple choice assessments","authors":"Isabelle Bosi, Deborah O'Mara, Tyler Clark, Nounu Sarukkali Patabendige, Sean E. Kennedy, Hasantha Gunasekera","doi":"10.12688/mep.19764.1","DOIUrl":"https://doi.org/10.12688/mep.19764.1","url":null,"abstract":"<ns4:p><ns4:bold>Background:</ns4:bold> Multiple choice questions (MCQs) are commonly used in medical student assessments but are often prepared by clinicians without formal education qualifications. This study aimed to inform the question writing process by investigating the association between MCQ characteristics and commonly used statistical measures of individual item quality for a paediatric medical term.</ns4:p><ns4:p> <ns4:bold>Methods:</ns4:bold> Item characteristics and statistics for five consecutive annual barrier paediatric medical student assessments (each n=60 items) were examined retrospectively. Items were characterised according to format (single best answer vs. extended matching); stem and option length; vignette presence and whether it was required to answer the question; inclusion of images/tables; clinical skill assessed; paediatric speciality; clinical relevance/applicability; Bloom's taxonomy domain; and item flaws. For each item, we recorded the facility (proportion of students answering correctly) and point biserial (discrimination).</ns4:p><ns4:p> <ns4:bold>Results:</ns4:bold> Item characteristics significantly positively correlated (p<0.05) with facility were a relevant vignette, diagnosis or application items, longer stem length, and higher clinical relevance. Recall items (e.g., epidemiology items) were associated with lower facility. Characteristics significantly correlated with higher discrimination were extended matching question (EMQ) format, longer options, and diagnostic and subspeciality items.
However, variation in item characteristics explained little of the variation in facility or point biserial (less than 10% of variation explained).</ns4:p><ns4:p> <ns4:bold>Conclusions:</ns4:bold> Our research supports the use of longer items, relevant vignettes, clinically relevant content, EMQs, and diagnostic items for optimising paediatric MCQ assessment quality. Variation in item characteristics explains only a small amount of the observed variation in statistical measures of MCQ quality, highlighting the continued importance of clinical expertise in writing high-quality assessments.</ns4:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"71 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135342433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
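The two item statistics named in the abstract above, facility and point biserial, are simple to compute from a scored response matrix. A minimal sketch in Python; the function names are illustrative, and the uncorrected point-biserial formula (the item's own score remains inside the total) is an assumption, not necessarily the authors' exact method:

```python
from statistics import mean, pstdev

def facility(item: list[int]) -> float:
    """Facility: proportion of students answering a 0/1-scored item correctly."""
    return sum(item) / len(item)

def point_biserial(item: list[int], totals: list[float]) -> float:
    """Point-biserial correlation between one item and the total test score."""
    p = facility(item)
    if p in (0.0, 1.0):
        return 0.0  # item has no variance, so discrimination is undefined
    correct = [t for r, t in zip(item, totals) if r == 1]
    incorrect = [t for r, t in zip(item, totals) if r == 0]
    # (mean of correct - mean of incorrect) / SD of totals, scaled by sqrt(p*q)
    return (mean(correct) - mean(incorrect)) / pstdev(totals) * (p * (1 - p)) ** 0.5
```

For a 60-item barrier exam, one would run both functions over each item column of the response matrix and flag items with low facility or near-zero discrimination for review.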
{"title":"Practical tips for organizing challenge-based learning in biomedical education","authors":"Farah R. W. Kools, Heleen van Ravenswaaij","doi":"10.12688/mep.19755.1","DOIUrl":"https://doi.org/10.12688/mep.19755.1","url":null,"abstract":"<ns4:p>Challenge-based learning (CBL) in biomedical education can prepare health professionals to handle complex challenges in their work environments through the development and practice of problem-solving skills. This paper provides twelve practical tips for biomedical educators to implement CBL in their education. The intricacies of CBL are explained, together with organizational tips and multiple levels of student support, to help students achieve CBL learning goals. Our aim is to promote CBL in biomedical education and to help students acquire valuable post-graduation skills while working towards solving real societal needs.</ns4:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"30 35","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135390279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neva Howard, Roger Edwards, Kathy Boutis, Seth Alexander, Martin Pusic
{"title":"Twelve Tips for using Learning Curves in Health Professions Education Research","authors":"Neva Howard, Roger Edwards, Kathy Boutis, Seth Alexander, Martin Pusic","doi":"10.12688/mep.19723.1","DOIUrl":"https://doi.org/10.12688/mep.19723.1","url":null,"abstract":"<ns3:p>Learning curves can be used to design, implement, and evaluate educational interventions. Attention to key aspects of the method can improve the fidelity of this representation of learning as well as its suitability for education and research purposes. This paper addresses when to use a learning curve, which graphical properties to consider, how to use learning curves quantitatively, and how to use observed thresholds to communicate meaning. We also address the associated ethics and policy considerations. We conclude with a best practices checklist for both educators and researchers seeking to use learning curves in their work.</ns3:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"222 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135476526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexandre Nehme, Rachel Btaiche, Marc Jreij, Jizel Jahjah, George Karam, Anne Belcher
{"title":"Successful implementation of Medical Education Faculty Development Project at Saint George University of Beirut in the immediate post triple blow to Beirut","authors":"Alexandre Nehme, Rachel Btaiche, Marc Jreij, Jizel Jahjah, George Karam, Anne Belcher","doi":"10.12688/mep.19519.2","DOIUrl":"https://doi.org/10.12688/mep.19519.2","url":null,"abstract":"<ns4:p>Background The aim of this study is to explore the efficacy of the Faculty Development Program (FDP) implemented at the Saint George University of Beirut-Faculty of Medicine (SGUB FM) under exceptional circumstances, namely the triple blow to Beirut. Methods The FDP, directed towards a cohort of 35 faculty members, comprised two major components: methodology of teaching and techniques of assessment. Kirkpatrick's assessment model, in combination with a specifically designed psychological questionnaire, was chosen to assess the effectiveness of the faculty development initiative. Results Results of the different questionnaires were interpreted individually, then through the lens of the psychological questionnaire. A majority of faculty (55%) were significantly affected psychologically by Beirut's triple blow, and 77% of all participants found the workshops to be of excellent quality (Kirkpatrick's Level I). Moreover, Kirkpatrick's Level II results yielded a 76% mean percentage of correct answers to post-workshop MCQs and a significant improvement in the mean results of the self-assessment questionnaires administered before and after each workshop. Results also show that the more a trainee was psychologically affected, the worse they performed, as evidenced by a decrease in the satisfaction rate as well as in the scores of the cognitive MCQs and of the self-assessment questionnaires.
Conclusions This study highlights that significant learning can occur amidst exceptional circumstances like the Beirut triple blow, and that administrations should invest in professional growth to retain their faculty.</ns4:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"80 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135934358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patricia McWalter, Abdullah AlKhenizan, Marium Ahmad
{"title":"An Evaluation of Mentorship for Hospital Residents in Saudi Arabia: A Qualitative study using Semi-structured Interviews","authors":"Patricia McWalter, Abdullah AlKhenizan, Marium Ahmad","doi":"10.12688/mep.19364.2","DOIUrl":"https://doi.org/10.12688/mep.19364.2","url":null,"abstract":"Background In this study, we explore how doctors in training perceive mentorship and leadership and whether they believe that mentoring influences the development of leadership skills. The study also addressed whether certain leadership styles lend themselves better to mentoring. Methods A qualitative research method was employed in this study, and ethical approval was granted by the Research Ethics Committee (REC) at King Faisal Specialist Hospital and Research Centre (KFSH&RC), after which twelve hospital residents were recruited using purposive sampling. Semi-structured interviews were conducted by the authors, and thematic data analysis was performed. Results Three themes emerged and were later refined using Braun and Clarke's 2006 thematic analysis method: 1. Purpose of mentorship, with sub-themes: a. Expectations, b. Perception of mentorship as supervision, and c. The role of mentorship, including informal mentoring, in leadership development; 2. Role of mentorship in leadership development; 3. Perceptions of a leader, with sub-themes: a. The leader as a manager, b. The leader as a role model, and c. The merits of different leadership styles. Discussion Most of the residents (doctors in training) viewed mentorship in a positive way. However, when the mentor was perceived more as a supervisor, the usefulness of mentoring was less clear. Nevertheless, residents found that informal mentoring contributed to leadership skills and would inspire them to become leaders themselves.
They were likely to be influenced positively when they saw the leader as a role model rather than a manager.","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"106 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135809674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toru Yamada, Taro Minami, Yuka Kitano, Shunpei Yoshino, Suguru Mabuchi, Nilam J. Soni
{"title":"Development of a national point-of-care ultrasound training course for physicians in Japan: A 3-year evaluation","authors":"Toru Yamada, Taro Minami, Yuka Kitano, Shunpei Yoshino, Suguru Mabuchi, Nilam J. Soni","doi":"10.12688/mep.19679.1","DOIUrl":"https://doi.org/10.12688/mep.19679.1","url":null,"abstract":"<ns4:p><ns4:bold>Purpose</ns4:bold>: Point-of-care ultrasound (POCUS) allows bedside clinicians to acquire, interpret, and integrate ultrasound images into patient care. Although the availability of POCUS training courses has increased, the educational effectiveness of these courses is unclear.</ns4:p><ns4:p> <ns4:bold>Methods</ns4:bold>: From 2017 to 2019, we investigated the educational effectiveness of a standardized 2-day hands-on POCUS training course and changes in pre- and post-course exam scores in relation to participants' (n = 571) clinical rank, years of POCUS experience, and frequency of POCUS use in clinical practice.</ns4:p><ns4:p> <ns4:bold>Results</ns4:bold>: The mean pre- and post-course examination scores were 67.2 (standard deviation [SD] 12.3) and 79.7 (SD 9.7), respectively. Higher pre-course examination scores were associated with higher clinical rank, more years of POCUS experience, and more frequent POCUS use (p < 0.05). All participants showed significant improvement from pre- to post-course exam scores. Though pre-course scores differed by clinical rank, POCUS experience, and frequency of POCUS use, differences in post-course scores according to these baseline differences were non-significant.</ns4:p><ns4:p> <ns4:bold>Conclusion</ns4:bold>: A standardized hands-on POCUS training course is effective for improving POCUS knowledge regardless of baseline differences in clinical rank, POCUS experience, or frequency of POCUS use.
Future studies should evaluate changes in POCUS use in clinical practice after POCUS training.</ns4:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"2 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135216826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steven A. Burr, Thomas Gale, Jolanta Kisielewska, Paul Millin, José M. Pêgo, Gergo Pinter, Iain M. Robinson, Daniel Zahra
{"title":"A narrative review of adaptive testing and its application to medical education","authors":"Steven A. Burr, Thomas Gale, Jolanta Kisielewska, Paul Millin, José M. Pêgo, Gergo Pinter, Iain M. Robinson, Daniel Zahra","doi":"10.12688/mep.19844.1","DOIUrl":"https://doi.org/10.12688/mep.19844.1","url":null,"abstract":"<ns4:p>Adaptive testing has a long but largely unrecognized history. The advent of computer-based testing has created new opportunities to incorporate adaptive testing into conventional programmes of study. Relatively recently, software has been developed that can automate the delivery of summative assessments that adapt by difficulty or content. Both types of adaptive testing require a large item bank that has been suitably quality assured. Adaptive testing by difficulty enables more reliable evaluation of individual candidate performance, although at the expense of transparency in decision making and of requiring unidirectional navigation. Adaptive testing by content enables reduction in compensation and targeted individual support to assure performance in all the required outcomes, although at the expense of discovery learning. With both types of adaptive testing, candidates are presented with different sets of items from each other, and there is the potential for that to be perceived as unfair. However, when candidates of different abilities receive the same items, they may receive too many items they can answer with ease, or too many that are too difficult to answer. Both situations may be considered unfair, as neither provides the opportunity for candidates to demonstrate what they know. Adapting by difficulty addresses this. Similarly, when everyone is presented with the same items but answers different items incorrectly, failing to provide individualized support and the opportunity to demonstrate performance in all the required outcomes by revisiting content previously answered incorrectly could also be considered unfair; adapting by content addresses this.
We review the educational rationale behind the evolution of adaptive testing and consider its inherent strengths and limitations. We explore the continuous pursuit of improvement in examination methodology and how software can facilitate personalized assessment. We highlight how this can serve as a catalyst for learning and the refinement of curricula, fostering engagement of learner and educator alike.</ns4:p>","PeriodicalId":74136,"journal":{"name":"MedEdPublish (2016)","volume":"35 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135274095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
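The mechanics of adapting by difficulty described in the abstract above can be caricatured in a few lines: under a one-parameter logistic (Rasch) model, the most informative next item is the unused one whose difficulty is closest to the current ability estimate. A minimal sketch, where the item bank, the fixed-step ability update, and all names are illustrative assumptions rather than the software the review discusses:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """1PL (Rasch) probability that a candidate of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pick_next_item(theta: float, bank: dict[str, float], used: set[str]) -> str:
    """Adapt by difficulty: choose the unused item whose difficulty is
    closest to the current ability estimate (where Rasch information peaks)."""
    return min((item for item in bank if item not in used),
               key=lambda item: abs(bank[item] - theta))

def update_theta(theta: float, correct: bool, step: float = 0.5) -> float:
    """Crude fixed-step ability update; real CAT engines use maximum
    likelihood or Bayesian (EAP) estimation instead."""
    return theta + step if correct else theta - step
```

The selection loop also shows why navigation must be unidirectional, as the review notes: each answer changes the ability estimate that drives the next selection, so revisiting an earlier item would invalidate everything chosen after it.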