Tasha Austin, Bharat S. Rawal, Alexandra Diehl, Jonathan Cosme
{"title":"公平的人工智能:揭示高等教育决策中潜在的人类偏见","authors":"Tasha Austin, Bharat S. Rawal, Alexandra Diehl, Jonathan Cosme","doi":"10.5772/acrt.20","DOIUrl":null,"url":null,"abstract":"The purpose of this study is to show how AI can serve as an assessment tool to detect potential human bias in decision making for students in higher education. Using student application data, we conduct a small study and apply a set of algorithms to perform deep learning analyses and assess human behaviors when identifying scholarship recipients. We conduct an interview with the organization’s leaders using this data to understand their criteria and expectations for identifying scholarship recipients and collectively explore the insights uncovered using these algorithms. Upon comparison to those recipients awarded the scholarships, we identify opportunities for the organization to implement a quantitative framework—a repeatable set of algorithms to help identify potential bias before awarding future scholarship recipients. ","PeriodicalId":431659,"journal":{"name":"AI, Computer Science and Robotics Technology","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI for Equity: Unpacking Potential Human Bias in Decision Making in Higher Education\",\"authors\":\"Tasha Austin, Bharat S. Rawal, Alexandra Diehl, Jonathan Cosme\",\"doi\":\"10.5772/acrt.20\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The purpose of this study is to show how AI can serve as an assessment tool to detect potential human bias in decision making for students in higher education. Using student application data, we conduct a small study and apply a set of algorithms to perform deep learning analyses and assess human behaviors when identifying scholarship recipients. 
We conduct an interview with the organization’s leaders using this data to understand their criteria and expectations for identifying scholarship recipients and collectively explore the insights uncovered using these algorithms. Upon comparison to those recipients awarded the scholarships, we identify opportunities for the organization to implement a quantitative framework—a repeatable set of algorithms to help identify potential bias before awarding future scholarship recipients. \",\"PeriodicalId\":431659,\"journal\":{\"name\":\"AI, Computer Science and Robotics Technology\",\"volume\":\"93 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI, Computer Science and Robotics Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5772/acrt.20\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI, Computer Science and Robotics Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5772/acrt.20","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AI for Equity: Unpacking Potential Human Bias in Decision Making in Higher Education
The purpose of this study is to show how AI can serve as an assessment tool to detect potential human bias in decision making for students in higher education. Using student application data, we conduct a small study and apply a set of algorithms to perform deep learning analyses and assess human behaviors when identifying scholarship recipients. We interview the organization's leaders to understand their criteria and expectations for identifying scholarship recipients, and collectively explore the insights uncovered by these algorithms. Comparing those insights with the recipients actually awarded scholarships, we identify opportunities for the organization to implement a quantitative framework: a repeatable set of algorithms to help identify potential bias before future scholarships are awarded.
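The abstract does not publish the algorithms themselves, so the following is only a minimal, hypothetical sketch of the kind of quantitative check such a framework might include: comparing scholarship selection rates across applicant groups via the disparate impact ratio (each group's selection rate divided by the highest group's rate). The group labels and data below are invented for illustration.

```python
# Hypothetical sketch of a quantitative bias check: disparate impact ratio.
# Not the authors' published method; illustrates one common fairness metric.

from collections import defaultdict

def selection_rates(applicants):
    """applicants: list of (group, awarded) pairs, awarded being a bool."""
    totals = defaultdict(int)
    awarded = defaultdict(int)
    for group, got_award in applicants:
        totals[group] += 1
        if got_award:
            awarded[group] += 1
    return {g: awarded[g] / totals[g] for g in totals}

def disparate_impact(applicants):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are commonly treated as a flag for human review."""
    rates = selection_rates(applicants)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Toy data, entirely invented: group A wins 2 of 4 awards, group B 1 of 4.
data = [("A", True), ("A", True), ("A", False), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False)]
print(disparate_impact(data))  # {'A': 1.0, 'B': 0.5}
```

A check like this could run before awards are finalized, flagging any group whose ratio falls below a chosen threshold for closer human review, which matches the abstract's goal of identifying potential bias before scholarships are awarded.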