Education in Medicine Journal, published 2022-03-30. DOI: 10.21315/eimj2022.14.1.8
Inter-Rater Reliability of Reflective-Writing Assessment in an Undergraduate Professionalism Course in Medical Education
Reflective writing is increasingly used in the teaching of professionalism. Because assessment enhances the learning process, effective evaluation of students’ reflective writing is needed. The aim of this study was to examine inter-rater agreement for two different reflective writing assessment rubrics, each of which categorised reflective writings into four levels of reflection, in an undergraduate medical professionalism course. Reflective writing assignments from 63 medical students enrolled in the 2017 medical professionalism course in the Faculty of Medicine Universitas Indonesia were randomly selected and independently assessed by two raters in September 2019. Intraclass correlation coefficient (ICC) analysis (two-way mixed effects, single measure) was carried out to determine inter-rater agreement on the reflective writing assessment. The less detailed instrument yielded a low ICC of 0.43, indicating poor inter-rater agreement, whereas the more detailed rubric showed poor to moderate reliability, with ICCs of 0.50, 0.50, and 0.36 for the score of each criterion, the total score across the assessed criteria, and the overall level of reflection, respectively. Utilising a more detailed (analytic) rubric to assess students’ reflective writing produced relatively higher inter-rater reliability, although the reliability achieved with this rubric was still only moderate.
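The ICC model named in the abstract (two-way mixed effects, single measure, i.e. ICC(3,1)) can be computed from a subjects-by-raters score matrix using the standard ANOVA decomposition. The sketch below is illustrative only: the score matrix is invented (it is not the study's data), and the function name is ours; it assumes consistency-type agreement under Shrout and Fleiss's ICC(3,1) definition.

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measure.

    `ratings` is an (n_subjects x n_raters) array of scores,
    e.g. reflection levels 1-4 assigned by each rater to each essay.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-rater means

    # Two-way ANOVA sums of squares: subjects (rows), raters (columns),
    # total, and residual error.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical data: 6 essays, 2 raters, reflection levels 1-4.
scores = np.array([[1, 2], [2, 2], [3, 4], [4, 4], [2, 3], [3, 3]])
print(round(icc_3_1(scores), 2))  # → 0.84
```

With real data, the same quantity is available from statistical packages (e.g. the ICC3 row of `pingouin.intraclass_corr`), which also report confidence intervals; thresholds such as "poor below 0.5, moderate 0.5 to 0.75" are the conventional interpretation bands the abstract appears to use.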