{"title":"探索响应技术如何影响讲师反馈:Canvas Speedgrater、Google Docs和Turnitin GradeMark的比较","authors":"Angela Laflen","doi":"10.1016/j.compcom.2023.102777","DOIUrl":null,"url":null,"abstract":"<div><p>There have been few studies examining the variation that exists within modes of feedback: for example, comparing how electronic text feedback created using Google Docs differs from electronic text feedback created using Microsoft Word or how audiovisual feedback created using TechSmith Capture differs from audiovisual feedback created using Screencast-O-Matic. However, the programs that instructors use to create feedback have different affordances, meaning that even within a single mode, the feedback students receive on their writing can vary significantly. To better understand the variation that exists within a single mode, this study investigates how affordances of Canvas Speedgrader, Google Docs, and Turnitin GradeMark impacted electronic text feedback.Based on analysis of 131 feedback files created using the 3 programs, in conjunction with 5 student surveys, and 2 instructor interviews, the study provides insights into how instructor written commentary (location, form, type, focus, and mitigation) varied by program and how participants perceived of feedback provided through the 3 programs. The study...s primary finding is that the affordances of the programs used to create electronic text feedbackresulted in significant differences ininstructorcommentary and instructor and student perceptions of feedback.</p></div>","PeriodicalId":35773,"journal":{"name":"Computers and Composition","volume":"68 ","pages":"Article 102777"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring how response technologies shape instructor feedback: A comparison of Canvas Speedgrader, Google Docs, and Turnitin GradeMark\",\"authors\":\"Angela Laflen\",\"doi\":\"10.1016/j.compcom.2023.102777\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>There have been few studies examining the variation that exists within modes of feedback: for example, comparing how electronic text feedback created using Google Docs differs from electronic text feedback created using Microsoft Word or how audiovisual feedback created using TechSmith Capture differs from audiovisual feedback created using Screencast-O-Matic. However, the programs that instructors use to create feedback have different affordances, meaning that even within a single mode, the feedback students receive on their writing can vary significantly. To better understand the variation that exists within a single mode, this study investigates how affordances of Canvas Speedgrader, Google Docs, and Turnitin GradeMark impacted electronic text feedback.Based on analysis of 131 feedback files created using the 3 programs, in conjunction with 5 student surveys, and 2 instructor interviews, the study provides insights into how instructor written commentary (location, form, type, focus, and mitigation) varied by program and how participants perceived of feedback provided through the 3 programs. 
The study...s primary finding is that the affordances of the programs used to create electronic text feedbackresulted in significant differences ininstructorcommentary and instructor and student perceptions of feedback.</p></div>\",\"PeriodicalId\":35773,\"journal\":{\"name\":\"Computers and Composition\",\"volume\":\"68 \",\"pages\":\"Article 102777\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers and Composition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S8755461523000282\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Composition","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S8755461523000282","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Exploring how response technologies shape instructor feedback: A comparison of Canvas Speedgrader, Google Docs, and Turnitin GradeMark
There have been few studies examining the variation that exists within modes of feedback: for example, comparing how electronic text feedback created using Google Docs differs from electronic text feedback created using Microsoft Word, or how audiovisual feedback created using TechSmith Capture differs from audiovisual feedback created using Screencast-O-Matic. However, the programs that instructors use to create feedback have different affordances, meaning that even within a single mode, the feedback students receive on their writing can vary significantly. To better understand the variation that exists within a single mode, this study investigates how the affordances of Canvas Speedgrader, Google Docs, and Turnitin GradeMark impacted electronic text feedback. Based on analysis of 131 feedback files created using the three programs, in conjunction with 5 student surveys and 2 instructor interviews, the study provides insights into how instructor written commentary (location, form, type, focus, and mitigation) varied by program and how participants perceived the feedback provided through the three programs. The study's primary finding is that the affordances of the programs used to create electronic text feedback resulted in significant differences in instructor commentary and in instructor and student perceptions of feedback.
Journal description:
Computers and Composition: An International Journal is devoted to exploring the use of computers in writing classes, writing programs, and writing research. It provides a forum for discussing issues connected with writing and computer use. It also offers information about integrating computers into writing programs on the basis of sound theoretical and pedagogical decisions and empirical evidence. It welcomes articles, reviews, and letters to the Editors that may be of interest to readers, including descriptions of computer-aided writing and/or reading instruction; discussions of topics related to computer use or software development; and explorations of controversial ethical, legal, or social issues related to the use of computers in writing programs.