An innovative method for teachers to formatively assess writing online

Sandra Heldsinger, Stephen M. Humphry
Research Conference 2022: Reimagining assessment: Proceedings and program
Published: 2022-08-01 · DOI: 10.37517/978-1-74286-685-7-1

Abstract

Assessment is an integral component of effective teaching, and a teacher’s professional judgement influences all routine aspects of their work. In the last 20 years, there has been considerable work internationally to support teachers in using assessment to improve student learning. However, a pressing issue prevents teacher professional judgement from being exploited to its full potential. The issue relates to teacher assessments of extended performances such as essays, and arises from the difficulty of obtaining reliable, consistent teacher assessments of students’ work. Literature published in the United States, England and Australia details evidence of low reliability and bias in teacher assessments. As a result, despite policymakers’ willingness to consider making greater use of teachers’ judgements in summative assessment, and thus to provide greater parity of esteem between teacher assessment and standardised testing, few gains have been made. While low reliability of scoring is a pressing issue in contexts where the data are used for summative purposes, it is also an issue for formative assessment. Inaccurate assessment necessarily impedes the effectiveness of any follow-up activity, and hence the effectiveness of formative assessment. In this session, Dr Sandy Heldsinger and Dr Stephen Humphry will share their research on writing assessment and explain how it has led to the development of an innovative assessment process that provides the advantages of rubrics, comparative judgements and automated marking with few of the disadvantages.