{"title":"评估评审员:调查论文评分过程","authors":"Adam Hasan, Bret Jones","doi":"10.3389/froh.2024.1272692","DOIUrl":null,"url":null,"abstract":"Pressure for accountability, transparency, and consistency of the assessment process is increasing. For assessing complex cognitive achievements, essays are probably the most familiar method, but essay scoring is notoriously unreliable. To address issues of assessment process, accountability, and consistency, this study explores essay marking practice amongst examiners in a UK dental school using a qualitative approach. Think aloud interviews were used to gain insight into how examiners make judgements whilst engaged in marking essays. The issues were multifactorial. These interviews revealed differing interpretations of assessment and corresponding individualised practices which contributed to skewing the outcome when essays were marked. Common to all examiners was the tendency to rank essays rather than adhere to criterion-referencing. Whether examiners mark holistically or analytically, essay marking guides presented a problem to inexperienced examiners, who needed more guidance and seemed reluctant to make definitive judgements. The marking and re-marking of scripts revealed that only 1 of the 9 examiners achieved the same grade category. All examiners awarded different scores corresponding to at least one grade difference; the magnitude of the difference was unrelated to experience examining. This study concludes that in order to improve assessment, there needs to be a shared understanding of standards and of how criteria are to be used for the benefit of staff and students.","PeriodicalId":94016,"journal":{"name":"Frontiers in oral health","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing the assessors: investigating the process of marking essays\",\"authors\":\"Adam Hasan, Bret Jones\",\"doi\":\"10.3389/froh.2024.1272692\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pressure for accountability, transparency, and consistency of the assessment process is increasing. For assessing complex cognitive achievements, essays are probably the most familiar method, but essay scoring is notoriously unreliable. To address issues of assessment process, accountability, and consistency, this study explores essay marking practice amongst examiners in a UK dental school using a qualitative approach. Think aloud interviews were used to gain insight into how examiners make judgements whilst engaged in marking essays. The issues were multifactorial. These interviews revealed differing interpretations of assessment and corresponding individualised practices which contributed to skewing the outcome when essays were marked. Common to all examiners was the tendency to rank essays rather than adhere to criterion-referencing. Whether examiners mark holistically or analytically, essay marking guides presented a problem to inexperienced examiners, who needed more guidance and seemed reluctant to make definitive judgements. The marking and re-marking of scripts revealed that only 1 of the 9 examiners achieved the same grade category. All examiners awarded different scores corresponding to at least one grade difference; the magnitude of the difference was unrelated to experience examining. 
This study concludes that in order to improve assessment, there needs to be a shared understanding of standards and of how criteria are to be used for the benefit of staff and students.\",\"PeriodicalId\":94016,\"journal\":{\"name\":\"Frontiers in oral health\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-04-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in oral health\",\"FirstCategoryId\":\"0\",\"ListUrlMain\":\"https://doi.org/10.3389/froh.2024.1272692\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in oral health","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.3389/froh.2024.1272692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Assessing the assessors: investigating the process of marking essays
Pressure for accountability, transparency, and consistency in the assessment process is increasing. For assessing complex cognitive achievements, essays are probably the most familiar method, but essay scoring is notoriously unreliable. To address issues of assessment process, accountability, and consistency, this study explores essay marking practice amongst examiners in a UK dental school using a qualitative approach. Think-aloud interviews were used to gain insight into how examiners make judgements whilst marking essays. The issues were multifactorial: the interviews revealed differing interpretations of assessment and corresponding individualised practices, which skewed the outcome when essays were marked. Common to all examiners was the tendency to rank essays rather than adhere to criterion-referencing. Whether examiners marked holistically or analytically, essay marking guides presented a problem to inexperienced examiners, who needed more guidance and seemed reluctant to make definitive judgements. When scripts were marked and then re-marked, only 1 of the 9 examiners awarded the same grade category on both occasions; every examiner's scores differed by at least one grade, and the magnitude of the difference was unrelated to examining experience. This study concludes that, in order to improve assessment, there needs to be a shared understanding of standards and of how criteria are to be used, for the benefit of both staff and students.