Domain Experts' Interpretations of Assessment Bias in a Scaled, Online Computer Science Curriculum

Benjamin Xie, Matthew J. Davidson, Baker Franke, Emily M. McLeod, Min Li, Amy J. Ko
{"title":"Domain Experts' Interpretations of Assessment Bias in a Scaled, Online Computer Science Curriculum","authors":"Benjamin Xie, Matthew J. Davidson, Baker Franke, Emily M. McLeod, Min Li, Amy J. Ko","doi":"10.1145/3430895.3460141","DOIUrl":null,"url":null,"abstract":"Understanding inequity at scale is necessary for designing equitable online learning experiences, but also difficult. Statistical techniques like differential item functioning (DIF) can help identify whether items/questions in an assessment exhibit potential bias by disadvantaging certain groups (e.g. whether item disadvantages woman vs man of equivalent knowledge). While testing companies typically use DIF to identify items to remove, we explored how domain-experts such as curriculum designers could use DIF to better understand how to design instructional materials to better serve students from diverse groups. Using Code.org's online Computer Science Discoveries (CSD) curriculum, we analyzed 139,097 responses from 19,617 students to identify DIF by gender and race in assessment items (e.g. multiple choice questions). Of the 17 items, we identified six that disadvantaged students who reported as female when compared to students who reported as non-binary or male. We also identified that most (13) items disadvantaged AHNP (African/Black, Hispanic/Latinx, Native American/Alaskan Native, Pacific Islander) students compared to WA (white, Asian) students. We then conducted a workshop and interviews with seven curriculum designers and found that they interpreted item bias relative to an intersection of item features and student identity, the broader curriculum, and differing uses for assessments. We interpreted these findings in the broader context of using data on assessment bias to inform domain-experts' efforts to design more equitable learning experiences.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Eighth ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3430895.3460141","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Understanding inequity at scale is necessary for designing equitable online learning experiences, but it is also difficult. Statistical techniques like differential item functioning (DIF) can help identify whether items/questions in an assessment exhibit potential bias by disadvantaging certain groups (e.g., whether an item disadvantages women compared to men of equivalent knowledge). While testing companies typically use DIF to identify items to remove, we explored how domain experts such as curriculum designers could use DIF to better understand how to design instructional materials that serve students from diverse groups. Using Code.org's online Computer Science Discoveries (CSD) curriculum, we analyzed 139,097 responses from 19,617 students to identify DIF by gender and race in assessment items (e.g., multiple-choice questions). Of the 17 items, we identified six that disadvantaged students who reported as female when compared to students who reported as non-binary or male. We also found that most items (13 of 17) disadvantaged AHNP (African/Black, Hispanic/Latinx, Native American/Alaskan Native, Pacific Islander) students compared to WA (white, Asian) students. We then conducted a workshop and interviews with seven curriculum designers and found that they interpreted item bias relative to an intersection of item features and student identity, the broader curriculum, and differing uses for assessments. We interpret these findings in the broader context of using data on assessment bias to inform domain experts' efforts to design more equitable learning experiences.
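
The abstract describes using DIF to flag assessment items that disadvantage particular groups of students with equivalent knowledge. As a rough illustration of how such a screen can be run, the sketch below applies logistic-regression DIF (Swaminathan & Rogers, 1990) to a single binary-scored item. The method choice, column names, and simulated data are assumptions for illustration only and are not drawn from the paper's analysis.

```python
# A minimal sketch of logistic-regression DIF screening for one item.
# Not necessarily the authors' method; column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2


def screen_dif(df: pd.DataFrame) -> dict:
    """Likelihood-ratio tests for uniform and non-uniform DIF on one binary item.

    Expects columns: 'correct' (0/1 item score), 'total_score' (matching
    criterion such as the rest score), and 'group' (focal vs. reference label).
    """
    base = smf.logit("correct ~ total_score", data=df).fit(disp=0)
    uniform = smf.logit("correct ~ total_score + C(group)", data=df).fit(disp=0)
    nonuniform = smf.logit("correct ~ total_score * C(group)", data=df).fit(disp=0)

    # Uniform DIF: does group membership shift item difficulty at a fixed ability level?
    lr_uniform = 2 * (uniform.llf - base.llf)
    # Non-uniform DIF: does the group effect change across ability levels?
    lr_nonuniform = 2 * (nonuniform.llf - uniform.llf)

    k = df["group"].nunique() - 1  # extra parameters added by each nested comparison
    return {
        "p_uniform": chi2.sf(lr_uniform, df=k),
        "p_nonuniform": chi2.sf(lr_nonuniform, df=k),
    }


if __name__ == "__main__":
    # Simulated responses in which the focal group is disadvantaged on this item;
    # simulated ability stands in for the observed total score in this toy example.
    rng = np.random.default_rng(0)
    n = 2000
    ability = rng.normal(size=n)
    group = rng.choice(["focal", "reference"], size=n)
    logit_p = ability - 0.5 - 0.6 * (group == "focal")
    correct = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
    responses = pd.DataFrame({"correct": correct, "total_score": ability, "group": group})
    print(screen_dif(responses))
```

A small p_uniform with a large p_nonuniform would suggest the item is uniformly harder for the focal group across ability levels, which is the kind of flag curriculum designers could then interpret against item features and the broader curriculum, as the study describes.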