Focused on Pedagogy: QR Grading Rubrics for Written Arguments

JCR quartile: Q3 (Mathematics)
Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner
{"title":"聚焦于教学法:书面论证的QR评分标准","authors":"Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner","doi":"10.5038/1936-4660.16.1.1431","DOIUrl":null,"url":null,"abstract":"Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed method study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found both new QR rubrics had good reliability. Cohen’s kappa found the study’s raters had substantial agreement on all rubric criteria (κ = 0.69 to 0.80). Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for use of rubrics in QR classrooms, and recommends future research.","PeriodicalId":36166,"journal":{"name":"Numeracy","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Focused on Pedagogy: QR Grading Rubrics for Written Arguments\",\"authors\":\"Ruby Daniels, Kathryn Appenzeller Knowles, Emily Naasz, Amanda Lindner\",\"doi\":\"10.5038/1936-4660.16.1.1431\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed method study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found both new QR rubrics had good reliability. Cohen’s kappa found the study’s raters had substantial agreement on all rubric criteria (κ = 0.69 to 0.80). 
Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for use of rubrics in QR classrooms, and recommends future research.\",\"PeriodicalId\":36166,\"journal\":{\"name\":\"Numeracy\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Numeracy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5038/1936-4660.16.1.1431\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numeracy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5038/1936-4660.16.1.1431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Mathematics","Score":null,"Total":0}
Cited by: 0

Abstract

Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed-methods study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis found both new QR rubrics had good reliability. Cohen’s kappa showed the study’s raters had substantial agreement on all rubric criteria (κ = 0.69 to 0.80). Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked to provide feedback about the new grading tools, 89% of students shared positive comments, reporting the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for use of rubrics in QR classrooms, and recommends future research.
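For readers who want to run the same reliability checks on their own rubric data, the sketch below shows how the two statistics reported in the abstract are commonly computed. It is a minimal illustration, not the authors' analysis code: the ratings are invented, and scikit-learn's cohen_kappa_score plus a hand-rolled Cronbach's alpha are assumed stand-ins for whatever software the study actually used.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores from two coders rating the same artifacts on a
# 4-point rubric scale (0-3); illustrative only, not data from the study.
rater_a = np.array([3, 2, 2, 1, 3, 0, 2, 3, 1, 2])
rater_b = np.array([3, 2, 1, 1, 3, 0, 2, 3, 2, 2])

# Inter-rater agreement on one rubric criterion, corrected for chance.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_artifacts, n_criteria) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each criterion
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Hypothetical per-criterion scores (numerical evidence, conclusions,
# writing) for six artifacts.
scores = np.array([
    [3, 3, 2],
    [2, 2, 2],
    [1, 2, 1],
    [3, 2, 3],
    [0, 1, 1],
    [2, 3, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
```

By the conventional Landis and Koch benchmarks, kappa values between 0.61 and 0.80 count as substantial agreement, which is how the abstract characterizes its reported range of 0.69 to 0.80.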
Source journal: Numeracy
Category: Mathematics (miscellaneous)
CiteScore: 1.30
Self-citation rate: 0.00%
Annual publications: 13
Review time: 12 weeks