{"title":"调试在线 CS 课件的包容性:有用吗?","authors":"Amreeta Chatterjee, Rudrajit Choudhuri, Mrinmoy Sarkar, Soumiki Chattopadhyay, Dylan Liu, Samarendra Hedaoo, Margaret Burnett, Anita Sarma","doi":"10.1145/3632620.3671117","DOIUrl":null,"url":null,"abstract":"Online computer science (CS) courses have broadened access to CS education, yet inclusivity barriers persist for minoritized groups in these courses. One problem that recent research has shown is that often inclusivity biases (“inclusivity bugs”) lurk within the course materials themselves, disproportionately disadvantaging minori-tized students. To address this issue, we investigated how a faculty member can use AID—an Automated Inclusivity Detector tool—to remove such inclusivity bugs from a large online CS1 (Intro CS) course and what is the impact of the resulting inclusivity fixes on the students’ experiences. To enable this evaluation, we first needed to (Bugs): investigate inclusivity challenges students face in 5 online CS courses; (Build): build decision rules to capture these challenges in courseware (“inclusivity bugs”) and implement them in the AID tool; (Faculty): investigate how the faculty member followed up on the inclusivity bugs that AID reported; and (Students): investigate how the faculty member’s changes impacted students’ experiences via a before-vs-after qualitative study with CS students. Our results from (Bugs) revealed 39 inclusivity challenges spanning courseware components from the syllabus to assignments. After implementing the rules in the tool (Build), our results from (Faculty) revealed how the faculty member treated AID more as a “peer” than an authority in deciding whether and how to fix the bugs. Finally, the study results with (Students) revealed that students found the after-fix courseware more approachable - feeling less overwhelmed and more in control in contrast to the before-fix version where they constantly felt overwhelmed, often seeking external assistance to understand course content.","PeriodicalId":245617,"journal":{"name":"International Computing Education Research Workshop","volume":"12 5","pages":"419-433"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Debugging for Inclusivity in Online CS Courseware: Does it Work?\",\"authors\":\"Amreeta Chatterjee, Rudrajit Choudhuri, Mrinmoy Sarkar, Soumiki Chattopadhyay, Dylan Liu, Samarendra Hedaoo, Margaret Burnett, Anita Sarma\",\"doi\":\"10.1145/3632620.3671117\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Online computer science (CS) courses have broadened access to CS education, yet inclusivity barriers persist for minoritized groups in these courses. One problem that recent research has shown is that often inclusivity biases (“inclusivity bugs”) lurk within the course materials themselves, disproportionately disadvantaging minori-tized students. To address this issue, we investigated how a faculty member can use AID—an Automated Inclusivity Detector tool—to remove such inclusivity bugs from a large online CS1 (Intro CS) course and what is the impact of the resulting inclusivity fixes on the students’ experiences. 
To enable this evaluation, we first needed to (Bugs): investigate inclusivity challenges students face in 5 online CS courses; (Build): build decision rules to capture these challenges in courseware (“inclusivity bugs”) and implement them in the AID tool; (Faculty): investigate how the faculty member followed up on the inclusivity bugs that AID reported; and (Students): investigate how the faculty member’s changes impacted students’ experiences via a before-vs-after qualitative study with CS students. Our results from (Bugs) revealed 39 inclusivity challenges spanning courseware components from the syllabus to assignments. After implementing the rules in the tool (Build), our results from (Faculty) revealed how the faculty member treated AID more as a “peer” than an authority in deciding whether and how to fix the bugs. Finally, the study results with (Students) revealed that students found the after-fix courseware more approachable - feeling less overwhelmed and more in control in contrast to the before-fix version where they constantly felt overwhelmed, often seeking external assistance to understand course content.\",\"PeriodicalId\":245617,\"journal\":{\"name\":\"International Computing Education Research Workshop\",\"volume\":\"12 5\",\"pages\":\"419-433\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Computing Education Research Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3632620.3671117\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Computing Education Research Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3632620.3671117","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Online computer science (CS) courses have broadened access to CS education, yet inclusivity barriers persist for minoritized groups in these courses. One problem recent research has revealed is that inclusivity biases (“inclusivity bugs”) often lurk within the course materials themselves, disproportionately disadvantaging minoritized students. To address this issue, we investigated how a faculty member can use AID, an Automated Inclusivity Detector tool, to remove such inclusivity bugs from a large online CS1 (Intro CS) course, and what impact the resulting inclusivity fixes have on students’ experiences. To enable this evaluation, we first needed to (Bugs) investigate the inclusivity challenges students face in 5 online CS courses; (Build) build decision rules that capture these challenges in courseware (“inclusivity bugs”) and implement them in the AID tool; (Faculty) investigate how the faculty member followed up on the inclusivity bugs that AID reported; and (Students) investigate how the faculty member’s changes impacted students’ experiences via a before-vs-after qualitative study with CS students. Our results from (Bugs) revealed 39 inclusivity challenges spanning courseware components from the syllabus to assignments. After implementing the rules in the tool (Build), our results from (Faculty) revealed that the faculty member treated AID more as a “peer” than as an authority when deciding whether and how to fix the bugs. Finally, the (Students) study revealed that students found the after-fix courseware more approachable: they felt less overwhelmed and more in control, in contrast to the before-fix version, where they constantly felt overwhelmed and often sought external assistance to understand course content.
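The abstract describes AID as encoding the identified inclusivity challenges as decision rules that are run over courseware content, but it does not specify how those rules are implemented. The following Python sketch is only an illustrative assumption of what a minimal rule-based detector of this kind could look like; the rule IDs, patterns, and warning messages are hypothetical and are not taken from AID or from the paper's findings.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of a rule-based "inclusivity bug" detector, loosely
# inspired by the AID tool named in the abstract. The rules below are
# illustrative assumptions, not the decision rules the study actually built.

@dataclass
class InclusivityBug:
    rule_id: str
    message: str
    excerpt: str

# Each decision rule maps a text pattern in courseware to a human-readable
# warning. A real tool would derive its rules from the inclusivity
# challenges identified with students, not from ad hoc regexes like these.
RULES = [
    ("R1-minimizing-language", re.compile(r"\b(obviously|simply|trivially)\b", re.I),
     "Minimizing language can make struggling students feel they do not belong."),
    ("R2-assumed-background", re.compile(r"\bas you already know\b", re.I),
     "Assumes prior background that not all students share."),
]

def detect_bugs(courseware_text: str) -> list[InclusivityBug]:
    """Run every decision rule over the courseware text and collect all matches."""
    bugs: list[InclusivityBug] = []
    for rule_id, pattern, message in RULES:
        for match in pattern.finditer(courseware_text):
            bugs.append(InclusivityBug(rule_id, message, match.group(0)))
    return bugs

if __name__ == "__main__":
    sample = "Obviously, as you already know, recursion simply calls itself."
    for bug in detect_bugs(sample):
        print(f"[{bug.rule_id}] '{bug.excerpt}': {bug.message}")
```

In practice, a detector like the one the paper evaluates would presumably cover more than surface wording (e.g., structure of the syllabus and assignments) and would report its findings to the faculty member for review rather than applying fixes automatically, consistent with the abstract's observation that the faculty member treated the tool as a "peer" rather than an authority.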