{"title":"开发和实施一个评估交互设计学生的标准系统","authors":"E. Oliver, Daniel Hatch","doi":"10.1109/ietc54973.2022.9796983","DOIUrl":null,"url":null,"abstract":"This paper is the continuation and conclusion of a study evaluating students’ creative works by full-time and adjunct faculty (previous study). The study addressed an interaction design program that previously lacked a uniform method for grading student assignments across courses. The first two phases of the study identified and validated issues that encompassed unclear expectations for completing and grading student assignments. Instructors also reported that significant time was required to grade student assignments. An intervention introduced a uniform system of rubrics designed to assess students in first-year interaction design courses. The rubrics integrated course competencies and program outcomes as the assessment criteria. Rubrics provided students and instructors a framework to determine the purpose and requirements of a specific assignment. The data collected included student assignment scores and the participants’ experiences through pre-and post-study surveys and interviews. The data helped answer if uniform grading rubrics can improve student performance based on scores, reduce the time spent grading by instructors, and improve inter-rater reliability amongst faculty members. Students and instructors who used the rubrics noted increased instructor feedback and overall academic achievement. The data revealed a degree of inter-rater reliability across two courses using the rubrics. 
This study increased the probability that using the same grading system would result in similar assignment scores across different instructors.","PeriodicalId":251518,"journal":{"name":"2022 Intermountain Engineering, Technology and Computing (IETC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Developing & Implementing a System of Rubrics for Assessing Interaction Design Students\",\"authors\":\"E. Oliver, Daniel Hatch\",\"doi\":\"10.1109/ietc54973.2022.9796983\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper is the continuation and conclusion of a study evaluating students’ creative works by full-time and adjunct faculty (previous study). The study addressed an interaction design program that previously lacked a uniform method for grading student assignments across courses. The first two phases of the study identified and validated issues that encompassed unclear expectations for completing and grading student assignments. Instructors also reported that significant time was required to grade student assignments. An intervention introduced a uniform system of rubrics designed to assess students in first-year interaction design courses. The rubrics integrated course competencies and program outcomes as the assessment criteria. Rubrics provided students and instructors a framework to determine the purpose and requirements of a specific assignment. The data collected included student assignment scores and the participants’ experiences through pre-and post-study surveys and interviews. The data helped answer if uniform grading rubrics can improve student performance based on scores, reduce the time spent grading by instructors, and improve inter-rater reliability amongst faculty members. 
Students and instructors who used the rubrics noted increased instructor feedback and overall academic achievement. The data revealed a degree of inter-rater reliability across two courses using the rubrics. This study increased the probability that using the same grading system would result in similar assignment scores across different instructors.\",\"PeriodicalId\":251518,\"journal\":{\"name\":\"2022 Intermountain Engineering, Technology and Computing (IETC)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Intermountain Engineering, Technology and Computing (IETC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ietc54973.2022.9796983\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Intermountain Engineering, Technology and Computing (IETC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ietc54973.2022.9796983","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Developing & Implementing a System of Rubrics for Assessing Interaction Design Students
This paper is the continuation and conclusion of an earlier study in which full-time and adjunct faculty evaluated students’ creative works. The study addressed an interaction design program that previously lacked a uniform method for grading student assignments across courses. The first two phases of the study identified and validated issues, including unclear expectations for completing and grading student assignments. Instructors also reported that grading student assignments required significant time. An intervention introduced a uniform system of rubrics designed to assess students in first-year interaction design courses. The rubrics integrated course competencies and program outcomes as the assessment criteria, giving students and instructors a framework for determining the purpose and requirements of a specific assignment. The data collected included student assignment scores and the participants’ experiences, gathered through pre- and post-study surveys and interviews. The data helped answer whether uniform grading rubrics can improve student performance as measured by scores, reduce the time instructors spend grading, and improve inter-rater reliability among faculty members. Students and instructors who used the rubrics noted increased instructor feedback and improved overall academic achievement. The data revealed a degree of inter-rater reliability across the two courses using the rubrics. This study increased the probability that using the same grading system would result in similar assignment scores across different instructors.