Narrative comments in internal medicine clerkship evaluations: room to grow.

Christine Crumbley, Karen Szauter, Bernard Karnath, Lindsay Sonstein, L. Maria Belalcazar, Sidra Qureshi

Medical Education Online, Volume 30, Issue 1, Article 2471434. Epub 2025-02-25. DOI: 10.1080/10872981.2025.2471434. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11864032/pdf/
Citations: 0
Abstract
The use of narrative comments in medical education poses a unique challenge: comments are intended to provide formative feedback to learners while also being used for summative grades. Given student and internal medicine (IM) grading committee concerns about narrative comment quality, we offered an interactive IM Grand Rounds (GR) session aimed at improving comment quality. We undertook this study to determine the quality of comments submitted by faculty and post-graduate trainees on students' IM Clerkship clinical assessments, and to explore the potential impact of our IM-GR. Archived comments from clerkship cohorts prior to and immediately following IM-GR were reviewed. Clinical clerkship assessment comments comprise three sections: Medical Student Performance Assessment (MSPE), Areas of Strength, and Areas for Improvement. We adapted a previously published comment assessment tool and identified the performance domain(s) discussed, inclusion of specific examples of student performance, evidence that the comment was based on direct observations, and, when applicable, the inclusion of actionable recommendations. Scoring was based on the number of domains represented and whether an example within that domain was provided (maximum score = 10). Analysis included descriptive statistics, t-tests, and Pearson correlation coefficients. We scored 697 comments. Overall, section ratings were MSPE 2.51 (SD 1.52, range 0-9), Areas of Strength 1.53 (SD 1.09, range 0-6), and Areas for Improvement 1.27 (SD 1.06, range 0-8). Significant differences were noted after Grand Rounds only in the MSPE mean scores. Within domains, trends toward increased use of specific examples in the post-GR narratives were noted. Assessment of both the breadth and depth of the included comments revealed low-quality narratives offered by our faculty and resident instructors. A focused session on best practices in writing narratives offered minimal change in the overall narrative quality, although we did notice a trend toward the inclusion of explanatory examples.
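To make the scoring rubric concrete, here is a minimal Python sketch of how a single comment might be scored and how pre/post Grand Rounds cohorts could be compared. The five domain names, the one-point-per-domain plus one-point-per-example weighting, and all data values are assumptions for illustration only; the abstract specifies just that scores reflect the domains represented and whether an in-domain example was provided, with a maximum score of 10.

```python
# A minimal sketch of the comment-scoring rubric and pre/post comparison
# described in the abstract. Domain names, weights, and data values are
# illustrative assumptions, not the authors' published instrument.
from scipy import stats

# Hypothetical performance domains: the abstract implies five domains,
# since one point per domain plus one per in-domain example yields the
# stated maximum score of 10.
DOMAINS = {
    "medical knowledge",
    "clinical reasoning",
    "communication",
    "professionalism",
    "work ethic",
}

def score_comment(domains_addressed: set[str],
                  domains_with_example: set[str]) -> int:
    """Score = number of domains represented, plus one point for each
    domain that also includes a specific example of student performance
    (maximum 10 under the five-domain assumption)."""
    addressed = domains_addressed & DOMAINS
    exemplified = domains_with_example & addressed
    return len(addressed) + len(exemplified)

# Placeholder pre/post Grand Rounds section scores; the study compared
# cohort means with a t-test, as in this independent-samples example.
pre_gr = [2, 3, 1, 4, 2, 3]
post_gr = [3, 4, 2, 5, 3, 4]
t_stat, p_value = stats.ttest_ind(pre_gr, post_gr)
print(f"MSPE section: t = {t_stat:.2f}, p = {p_value:.3f}")
```

The study also reports Pearson correlation coefficients, for which scipy.stats.pearsonr would serve the same role; the sketch above only illustrates the rubric's arithmetic and the pre/post comparison.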
Journal Introduction:
Medical Education Online is an open access journal of health care education, publishing peer-reviewed research, perspectives, reviews, and early documentation of new ideas and trends.
Medical Education Online aims to disseminate information on the education and training of physicians and other health care professionals. Manuscripts may address any aspect of health care education and training, including, but not limited to:
- Basic science education
- Clinical science education
- Residency education
- Learning theory
- Problem-based learning (PBL)
- Curriculum development
- Research design and statistics
- Measurement and evaluation
- Faculty development
- Informatics/web