{"title":"为什么理解和应对搜索引擎偏见对语言教育者很重要","authors":"Charles Allen Brown","doi":"10.1002/tesj.779","DOIUrl":null,"url":null,"abstract":"<h2>1 INTRODUCTION</h2>\n<p>Computer science research has increasingly documented social group bias in artificial intelligence (AI). Examples include bias against African Americans in software used by courts to determine bail and sentencing decisions (Angwin et al., <span>2016</span>), facial recognition systems performing better on people with lighter skin (Buolamwini & Gebru, <span>2018</span>), and a hiring algorithm penalizing graduates of women's colleges (Silberg & Manyika, <span>2019</span>). Sources for AI bias are complex. They include bias in the initial data used by the AI along with the role of AI algorithms themselves in “amplifying” such initial biases (Ntoutsi et al., <span>2020</span>). Effects of AI bias are commonly seen in search engine results that an AI application many use on a daily basis (Noble, <span>2018</span>).</p>","PeriodicalId":51742,"journal":{"name":"TESOL Journal","volume":"265 1","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Why understanding and responding to search engine bias matters to language educators\",\"authors\":\"Charles Allen Brown\",\"doi\":\"10.1002/tesj.779\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h2>1 INTRODUCTION</h2>\\n<p>Computer science research has increasingly documented social group bias in artificial intelligence (AI). Examples include bias against African Americans in software used by courts to determine bail and sentencing decisions (Angwin et al., <span>2016</span>), facial recognition systems performing better on people with lighter skin (Buolamwini & Gebru, <span>2018</span>), and a hiring algorithm penalizing graduates of women's colleges (Silberg & Manyika, <span>2019</span>). 
Sources for AI bias are complex. They include bias in the initial data used by the AI along with the role of AI algorithms themselves in “amplifying” such initial biases (Ntoutsi et al., <span>2020</span>). Effects of AI bias are commonly seen in search engine results that an AI application many use on a daily basis (Noble, <span>2018</span>).</p>\",\"PeriodicalId\":51742,\"journal\":{\"name\":\"TESOL Journal\",\"volume\":\"265 1\",\"pages\":\"\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2023-11-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"TESOL Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/tesj.779\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"TESOL Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/tesj.779","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0
摘要
计算机科学研究越来越多地记录了人工智能(AI)中的社会群体偏见。例子包括法院在确定保释和量刑决定时使用的软件中对非裔美国人的偏见(Angwin et al., 2016),面部识别系统在肤色较浅的人身上表现更好(Buolamwini &Gebru, 2018),以及一种惩罚女子大学毕业生的招聘算法(Silberg &艾斯曼,2019)。人工智能偏见的来源很复杂。它们包括人工智能使用的初始数据中的偏差,以及人工智能算法本身在“放大”这种初始偏差中的作用(noutsi等人,2020)。人工智能偏差的影响通常出现在人工智能应用程序每天使用的搜索引擎结果中(Noble, 2018)。
Why understanding and responding to search engine bias matters to language educators
1 INTRODUCTION
Computer science research has increasingly documented social group bias in artificial intelligence (AI). Examples include bias against African Americans in software used by courts to determine bail and sentencing decisions (Angwin et al., 2016), facial recognition systems that perform better on people with lighter skin (Buolamwini & Gebru, 2018), and a hiring algorithm that penalized graduates of women's colleges (Silberg & Manyika, 2019). The sources of AI bias are complex. They include bias in the initial data used by the AI along with the role of AI algorithms themselves in "amplifying" such initial biases (Ntoutsi et al., 2020). The effects of AI bias are commonly seen in the results of search engines, an AI application that many use on a daily basis (Noble, 2018).
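The amplification dynamic described above can be made concrete with a minimal toy simulation. This sketch is not from the article: it assumes a hypothetical two-result search ranking in which the top-ranked result attracts more clicks, and accumulated clicks in turn determine the ranking. A one-click initial gap in the data then grows steadily, even though the ranking rule itself treats both results identically.

```python
def amplify(scores, rounds):
    """Feedback-loop ranking: each round, the currently top-ranked
    result receives 2 clicks and the other receives 1, then both are
    re-ranked by their click totals. A small initial gap compounds."""
    scores = dict(scores)  # copy so the caller's data is untouched
    for _ in range(rounds):
        top = max(scores, key=scores.get)  # current top-ranked result
        for item in scores:
            scores[item] += 2 if item == top else 1
    return scores

# Hypothetical data: result_a starts just one click ahead of result_b.
initial = {"result_a": 11, "result_b": 10}
final = amplify(initial, rounds=50)
print(final)  # the 1-click gap has grown to 51 clicks
```

The point of the sketch is that the algorithm introduces no bias of its own; it merely feeds a slightly skewed input back into itself, which is one mechanism by which Ntoutsi et al. (2020) describe initial biases being "amplified."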
Journal introduction:
TESOL Journal (TJ) is a refereed, practitioner-oriented electronic journal based on current theory and research in the field of TESOL. TJ is a forum for second and foreign language educators at all levels to engage in the ways that research and theorizing can inform, shape, and ground teaching practices and perspectives. Articles enable an active and vibrant professional dialogue about research- and theory-based practices as well as practice-oriented theorizing and research.