{"title":"儿科医学临床推理的文字一致性检验:医学生的表现和专家小组的可靠性","authors":"A. Morris, D. Campbell","doi":"10.11157/FOHPE.V16I2.65","DOIUrl":null,"url":null,"abstract":"Background: This study aimed to determine the correlation between student performance in clinical reasoning on the Script Concordance Test (SCT) and a modified essay question (MEQ) exam in a paediatric teaching block and to measure the intra-rater reliability of the expert scoring panel. Method: A 65-item assessment was developed using the accepted SCT method and scored against the responses of a panel of 10 general and subspecialty paediatricians. Student scores for the summative modified essay question examination at the end of the child and adolescent health block were compared with the score on the SCT. Intra-expert reliability was measured for the 10 paediatricians on the expert panel. Results: One hundred and two students completed both the SCT and the MEQ examination, with the correlation coefficient indicating moderate correlation (r = 0.46). The weighted Cohen kappa for the paediatricians on the panel ranged from 0.61–0.86, demonstrating good to excellent intra-rater agreement. Conclusion: We found that the MEQ is not a reliable means of measuring clinical reasoning of medical students, with only moderate correlation with the SCT, and that alternative methods such as SCT should be considered. Our finding of high reliability for paediatricians on the scoring panel is the first published using this methodology. It suggests that for lower stakes examinations, there is no need to re-test examiners. We do, however, propose that this simple method of assessing intra-rater reliability should be considered for high-stakes medical student examinations.","PeriodicalId":306686,"journal":{"name":"Focus on health professional education : a multi-disciplinary journal","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The script concordance test for clinical reasoning in paediatric medicine: Medical student performance and expert panel reliability\",\"authors\":\"A. Morris, D. Campbell\",\"doi\":\"10.11157/FOHPE.V16I2.65\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background: This study aimed to determine the correlation between student performance in clinical reasoning on the Script Concordance Test (SCT) and a modified essay question (MEQ) exam in a paediatric teaching block and to measure the intra-rater reliability of the expert scoring panel. Method: A 65-item assessment was developed using the accepted SCT method and scored against the responses of a panel of 10 general and subspecialty paediatricians. Student scores for the summative modified essay question examination at the end of the child and adolescent health block were compared with the score on the SCT. Intra-expert reliability was measured for the 10 paediatricians on the expert panel. Results: One hundred and two students completed both the SCT and the MEQ examination, with the correlation coefficient indicating moderate correlation (r = 0.46). The weighted Cohen kappa for the paediatricians on the panel ranged from 0.61–0.86, demonstrating good to excellent intra-rater agreement. Conclusion: We found that the MEQ is not a reliable means of measuring clinical reasoning of medical students, with only moderate correlation with the SCT, and that alternative methods such as SCT should be considered. 
Our finding of high reliability for paediatricians on the scoring panel is the first published using this methodology. It suggests that for lower stakes examinations, there is no need to re-test examiners. We do, however, propose that this simple method of assessing intra-rater reliability should be considered for high-stakes medical student examinations.\",\"PeriodicalId\":306686,\"journal\":{\"name\":\"Focus on health professional education : a multi-disciplinary journal\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Focus on health professional education : a multi-disciplinary journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.11157/FOHPE.V16I2.65\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Focus on health professional education : a multi-disciplinary journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11157/FOHPE.V16I2.65","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Background: This study aimed to determine the correlation between medical students' clinical reasoning performance on a Script Concordance Test (SCT) and a modified essay question (MEQ) examination in a paediatric teaching block, and to measure the intra-rater reliability of the expert scoring panel.

Method: A 65-item assessment was developed using the accepted SCT method and scored against the responses of a panel of 10 general and subspecialty paediatricians. Students' scores on the summative MEQ examination at the end of the child and adolescent health block were compared with their scores on the SCT. Intra-rater reliability was measured for the 10 paediatricians on the expert panel.

Results: One hundred and two students completed both the SCT and the MEQ examination; the correlation between the two was moderate (r = 0.46). The weighted Cohen's kappa for the paediatricians on the panel ranged from 0.61 to 0.86, demonstrating good to excellent intra-rater agreement.

Conclusion: We found that the MEQ, showing only moderate correlation with the SCT, is not a reliable means of measuring medical students' clinical reasoning, and that alternative methods such as the SCT should be considered. Our finding of high intra-rater reliability for the paediatricians on the scoring panel is the first published using this methodology; it suggests that for lower-stakes examinations there is no need to re-test examiners. We do, however, propose that this simple method of assessing intra-rater reliability be considered for high-stakes medical student examinations.
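The abstract names three computations without detailing them: aggregate SCT scoring against a reference panel, the correlation between SCT and MEQ scores, and a weighted Cohen's kappa for intra-rater agreement. The Python sketch below (using scipy and scikit-learn) is illustrative only: all scores and ratings are hypothetical placeholders, the scoring function follows the conventional aggregate SCT method rather than code published with the paper, and the kappa weighting scheme (quadratic here) is not specified in the abstract.

# Illustrative sketch only: hypothetical data throughout; the paper does not
# publish its scoring code, and the kappa weighting scheme is assumed.
from collections import Counter
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

def sct_item_credit(student_answer: int, panel_answers: list[int]) -> float:
    """Conventional aggregate SCT scoring: an answer earns the number of
    panellists who chose it divided by the modal count, so the panel's most
    frequent answer earns 1.0 and an unchosen answer earns 0.0."""
    counts = Counter(panel_answers)
    return counts.get(student_answer, 0) / max(counts.values())

# One hypothetical item: 10 panellists respond on the SCT's -2..+2 scale.
panel = [1, 1, 1, 1, 2, 2, 0, 1, 2, 1]
print(sct_item_credit(1, panel))   # 1.0 -- the modal answer
print(sct_item_credit(2, panel))   # 0.5 -- 3 panellists vs. the modal 6
print(sct_item_credit(-1, panel))  # 0.0 -- no panellist chose it

# Hypothetical total scores for six students on both assessments.
sct_scores = [62.0, 71.5, 58.0, 80.0, 66.5, 74.0]
meq_scores = [65.0, 70.0, 55.5, 74.0, 69.0, 71.5]
r, p = pearsonr(sct_scores, meq_scores)
print(f"SCT vs MEQ: r = {r:.2f} (p = {p:.3f})")

# Hypothetical intra-rater check: one paediatrician answers the same ten
# items on two occasions; quadratic weighting penalises larger disagreements
# on the ordered -2..+2 scale more heavily.
first_pass  = [-2, -1, 0, 1, 2, 0, 1, -1, 2, 0]
second_pass = [-2,  0, 0, 1, 1, 0, 1, -1, 2, -1]
kappa = cohen_kappa_score(first_pass, second_pass, weights="quadratic")
print(f"Weighted Cohen's kappa (intra-rater): {kappa:.2f}")

In the study itself, a student's total SCT score would sum item credits across all 65 items, typically rescaled to a percentage, and the intra-rater comparison would be run once per panellist to yield the reported range of kappa values.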